Test Report: Docker_Linux_crio_arm64 22054

83cf6fd59e5d8f3d63346b28bfbd6fd8e1f567be:2025-12-08:42677

Failed tests (55/364)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.31
44 TestAddons/parallel/Registry 20.97
45 TestAddons/parallel/RegistryCreds 0.52
46 TestAddons/parallel/Ingress 144.42
47 TestAddons/parallel/InspektorGadget 6.27
48 TestAddons/parallel/MetricsServer 5.38
50 TestAddons/parallel/CSI 51.22
51 TestAddons/parallel/Headlamp 3.27
52 TestAddons/parallel/CloudSpanner 6.33
53 TestAddons/parallel/LocalPath 8.83
54 TestAddons/parallel/NvidiaDevicePlugin 5.3
55 TestAddons/parallel/Yakd 6.28
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.63
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.61
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.43
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.62
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.46
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.62
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.16
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.74
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.11
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.43
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.7
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.43
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 103.07
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.29
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.3
293 TestJSONOutput/pause/Command 2.42
299 TestJSONOutput/unpause/Command 1.88
358 TestKubernetesUpgrade 791.33
384 TestPause/serial/Pause 6.16
399 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.6
406 TestStartStop/group/old-k8s-version/serial/Pause 8.69
408 TestStartStop/group/no-preload/serial/FirstStart 518.89
412 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.47
419 TestStartStop/group/embed-certs/serial/Pause 6.08
423 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.58
430 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.15
432 TestStartStop/group/newest-cni/serial/FirstStart 502.73
433 TestStartStop/group/no-preload/serial/DeployApp 2.99
434 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 96.49
437 TestStartStop/group/no-preload/serial/SecondStart 370.71
439 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 107.38
442 TestStartStop/group/newest-cni/serial/SecondStart 375.26
443 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.91
447 TestStartStop/group/newest-cni/serial/Pause 9.89
455 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.109
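
Any of the failures listed above can be re-run in isolation with go test's -run filter. The sketch below is illustrative only: it assumes a minikube checkout with the integration tests under test/integration and a freshly built binary at out/minikube-linux-arm64 (the path used throughout this report); whatever driver and runtime flags the CI harness adds are omitted here.

	# illustrative re-run of a single failed test from this report
	go test ./test/integration -run "TestAddons/parallel/Registry" -v -timeout 30m
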
TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable volcano --alsologtostderr -v=1: exit status 11 (311.646624ms)

                                                
                                                
-- stdout --

-- /stdout --
** stderr ** 
	I1208 00:15:33.601008  798676 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:15:33.601905  798676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:33.601922  798676 out.go:374] Setting ErrFile to fd 2...
	I1208 00:15:33.601928  798676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:33.602202  798676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:15:33.602504  798676 mustload.go:66] Loading cluster: addons-429840
	I1208 00:15:33.602968  798676 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:33.602990  798676 addons.go:622] checking whether the cluster is paused
	I1208 00:15:33.603106  798676 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:33.603122  798676 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:15:33.603639  798676 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:15:33.642041  798676 ssh_runner.go:195] Run: systemctl --version
	I1208 00:15:33.642118  798676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:15:33.659393  798676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:15:33.769477  798676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:15:33.769567  798676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:15:33.804511  798676 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:15:33.804546  798676 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:15:33.804552  798676 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:15:33.804556  798676 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:15:33.804560  798676 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:15:33.804563  798676 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:15:33.804586  798676 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:15:33.804590  798676 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:15:33.804593  798676 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:15:33.804604  798676 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:15:33.804608  798676 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:15:33.804612  798676 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:15:33.804615  798676 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:15:33.804618  798676 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:15:33.804622  798676 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:15:33.804632  798676 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:15:33.804641  798676 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:15:33.804646  798676 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:15:33.804661  798676 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:15:33.804664  798676 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:15:33.804670  798676 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:15:33.804673  798676 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:15:33.804676  798676 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:15:33.804679  798676 cri.go:89] found id: ""
	I1208 00:15:33.804746  798676 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:15:33.820158  798676 out.go:203] 
	W1208 00:15:33.823166  798676 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:15:33.823196  798676 out.go:285] * 
	* 
	W1208 00:15:33.829655  798676 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:15:33.832832  798676 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.31s)
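
The Volcano test body itself was skipped ("skipping: crio not supported"); the exit status 11 comes from the trailing addons disable volcano call. Its paused-state check shells out to sudo runc list -f json inside the node and fails with "open /run/runc: no such file or directory", and the same MK_ADDON_DISABLE_PAUSED signature repeats in the other addon-disable failures in this run. A minimal sketch for reproducing the failing check by hand, assuming the addons-429840 profile is still running:

	# the two node-side commands the disable path runs, taken from the log above
	out/minikube-linux-arm64 -p addons-429840 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p addons-429840 ssh "sudo runc list -f json"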

                                                
                                    
TestAddons/parallel/Registry (20.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.558464ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003154617s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.007018765s
addons_test.go:392: (dbg) Run:  kubectl --context addons-429840 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-429840 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-429840 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.36657796s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 ip
2025/12/08 00:16:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable registry --alsologtostderr -v=1: exit status 11 (308.259532ms)

                                                
                                                
-- stdout --

-- /stdout --
** stderr ** 
	I1208 00:16:05.957555  799777 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:05.958506  799777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:05.958560  799777 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:05.958584  799777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:05.958934  799777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:05.959272  799777 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:05.959694  799777 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:05.959740  799777 addons.go:622] checking whether the cluster is paused
	I1208 00:16:05.959878  799777 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:05.959916  799777 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:05.960464  799777 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:05.977653  799777 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:05.977713  799777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:05.995895  799777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:06.109654  799777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:06.109747  799777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:06.151532  799777 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:06.151608  799777 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:06.151627  799777 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:06.151646  799777 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:06.151675  799777 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:06.151698  799777 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:06.151722  799777 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:06.151739  799777 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:06.151756  799777 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:06.151783  799777 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:06.151806  799777 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:06.151824  799777 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:06.151843  799777 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:06.151860  799777 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:06.151886  799777 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:06.151910  799777 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:06.151936  799777 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:06.151954  799777 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:06.151972  799777 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:06.152004  799777 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:06.152025  799777 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:06.152042  799777 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:06.152060  799777 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:06.152087  799777 cri.go:89] found id: ""
	I1208 00:16:06.152167  799777 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:06.187038  799777 out.go:203] 
	W1208 00:16:06.189878  799777 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:06.189909  799777 out.go:285] * 
	* 
	W1208 00:16:06.196317  799777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:06.199123  799777 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (20.97s)
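
The registry checks themselves passed (both pods became healthy and the in-cluster wget probe succeeded); only the final addons disable registry step failed, with the same runc list error as above. For re-checking registry reachability by hand, a sketch that mirrors the test; the /v2/ path is the standard registry API base and is an assumption, not something taken from this log:

	kubectl --context addons-429840 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -s http://192.168.49.2:5000/v2/
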

                                                
                                    
TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.846165ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-429840
addons_test.go:332: (dbg) Run:  kubectl --context addons-429840 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (293.916004ms)

                                                
                                                
-- stdout --

-- /stdout --
** stderr ** 
	I1208 00:16:39.990183  800757 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:39.991001  800757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.991052  800757 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:39.991075  800757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.992574  800757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:39.992903  800757 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:39.993289  800757 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.993307  800757 addons.go:622] checking whether the cluster is paused
	I1208 00:16:39.993418  800757 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.993434  800757 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:39.993942  800757 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:40.027569  800757 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:40.027631  800757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:40.064751  800757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:40.169548  800757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:40.169676  800757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:40.206919  800757 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:40.206944  800757 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:40.206960  800757 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:40.206964  800757 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:40.206992  800757 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:40.206997  800757 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:40.207001  800757 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:40.207004  800757 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:40.207007  800757 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:40.207039  800757 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:40.207051  800757 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:40.207055  800757 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:40.207058  800757 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:40.207078  800757 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:40.207088  800757 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:40.207099  800757 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:40.207132  800757 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:40.207137  800757 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:40.207157  800757 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:40.207167  800757 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:40.207195  800757 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:40.207204  800757 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:40.207207  800757 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:40.207211  800757 cri.go:89] found id: ""
	I1208 00:16:40.207271  800757 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:40.222168  800757 out.go:203] 
	W1208 00:16:40.225089  800757 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:40.225119  800757 out.go:285] * 
	* 
	W1208 00:16:40.231438  800757 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:40.234549  800757 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

                                                
                                    
TestAddons/parallel/Ingress (144.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-429840 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-429840 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-429840 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [552138fb-b38a-4a6d-85b5-79bfb8cb2d22] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [552138fb-b38a-4a6d-85b5-79bfb8cb2d22] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004173899s
I1208 00:16:27.524484  791807 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.617828227s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-429840 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
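
Unlike the addon-disable failures, this is a connectivity failure: the in-node curl against http://127.0.0.1/ with Host: nginx.example.com ran for over two minutes and exited with status 28, curl's "operation timed out" code, so the ingress controller most likely never answered. A hedged follow-up sketch for narrowing it down while the profile is still up:

	# check the controller pods, then retry the same request verbosely with a bounded timeout
	kubectl --context addons-429840 -n ingress-nginx get pods -o wide
	out/minikube-linux-arm64 -p addons-429840 ssh "curl -sv --max-time 15 http://127.0.0.1/ -H 'Host: nginx.example.com'"
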
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-429840
helpers_test.go:243: (dbg) docker inspect addons-429840:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e",
	        "Created": "2025-12-08T00:13:22.039633847Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 793218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:13:22.099748278Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e-json.log",
	        "Name": "/addons-429840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-429840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-429840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e",
	                "LowerDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-429840",
	                "Source": "/var/lib/docker/volumes/addons-429840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-429840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-429840",
	                "name.minikube.sigs.k8s.io": "addons-429840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5e6bd5e9d9ab86569e41be1f9f0db050fe640dc268b6fe00540a5eeb375bd69",
	            "SandboxKey": "/var/run/docker/netns/e5e6bd5e9d9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-429840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:87:c5:f5:67:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9d5a0c597ead7d779322ea2df113cf05b50efef1f467d1495dcf34843407b4d",
	                    "EndpointID": "b44c7109c4ba972bab0ade4dd76749da0756227a4eaaf162adcd0969bb8947c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-429840",
	                        "4788dff0a9c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
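
The full docker inspect dump above is what the post-mortem helper records. When only a few fields matter (container state, node IP, mapped SSH port), a format-string query is a lighter alternative; the template paths below are the ones already visible in the output and log lines above:

	docker inspect addons-429840 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-429840").IPAddress}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
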
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-429840 -n addons-429840
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-429840 logs -n 25: (1.504625986s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-748036                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-748036 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ --download-only -p binary-mirror-361883 --alsologtostderr --binary-mirror http://127.0.0.1:39527 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-361883   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ -p binary-mirror-361883                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-361883   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ addons  │ disable dashboard -p addons-429840                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ addons  │ enable dashboard -p addons-429840                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ start   │ -p addons-429840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:15 UTC │
	│ addons  │ addons-429840 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ addons  │ addons-429840 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-429840 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ addons  │ addons-429840 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ ip      │ addons-429840 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │ 08 Dec 25 00:16 UTC │
	│ addons  │ addons-429840 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ ssh     │ addons-429840 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-429840                                                                                                                                                                                                                                                                                                                                                                                           │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │ 08 Dec 25 00:16 UTC │
	│ addons  │ addons-429840 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ addons  │ addons-429840 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │                     │
	│ ssh     │ addons-429840 ssh cat /opt/local-path-provisioner/pvc-12f50409-998e-4c25-af97-bb31f5aacd15_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:16 UTC │ 08 Dec 25 00:16 UTC │
	│ addons  │ addons-429840 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:17 UTC │                     │
	│ addons  │ addons-429840 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:17 UTC │                     │
	│ ip      │ addons-429840 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:18 UTC │ 08 Dec 25 00:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
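
Each row in the audit table above is one invocation of the minikube binary under test against the addons-429840 profile. A minimal reconstruction of one row as a command line, assuming the MINIKUBE_BIN value shown in the start log below and that the profile is passed with -p (the exact flag ordering is an assumption, not taken from the log):

    out/minikube-linux-arm64 addons disable registry -p addons-429840 --alsologtostderr -v=1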
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:58
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:58.086654  792815 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:58.086829  792815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:58.086886  792815 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:58.086900  792815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:58.087178  792815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:12:58.087689  792815 out.go:368] Setting JSON to false
	I1208 00:12:58.088649  792815 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17710,"bootTime":1765135068,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:12:58.088719  792815 start.go:143] virtualization:  
	I1208 00:12:58.092166  792815 out.go:179] * [addons-429840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:12:58.095237  792815 notify.go:221] Checking for updates...
	I1208 00:12:58.095804  792815 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:12:58.098982  792815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:58.102050  792815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:12:58.104917  792815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:12:58.107706  792815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:12:58.110497  792815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:12:58.113780  792815 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:58.139785  792815 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:58.139908  792815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:58.198511  792815 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:58.189243732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:58.198617  792815 docker.go:319] overlay module found
	I1208 00:12:58.201653  792815 out.go:179] * Using the docker driver based on user configuration
	I1208 00:12:58.204469  792815 start.go:309] selected driver: docker
	I1208 00:12:58.204491  792815 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:58.204505  792815 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:12:58.205259  792815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:58.269254  792815 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:58.260156382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:58.269424  792815 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:58.269652  792815 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:12:58.272550  792815 out.go:179] * Using Docker driver with root privileges
	I1208 00:12:58.275291  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:12:58.275365  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:12:58.275378  792815 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:12:58.275459  792815 start.go:353] cluster config:
	{Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1208 00:12:58.278490  792815 out.go:179] * Starting "addons-429840" primary control-plane node in "addons-429840" cluster
	I1208 00:12:58.281247  792815 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:12:58.284050  792815 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:12:58.286904  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:12:58.286952  792815 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 00:12:58.286966  792815 cache.go:65] Caching tarball of preloaded images
	I1208 00:12:58.286975  792815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:12:58.287058  792815 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:12:58.287069  792815 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 00:12:58.287456  792815 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json ...
	I1208 00:12:58.287488  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json: {Name:mkdd8650adb0bf4e186015e5cc2e904609ad2ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:12:58.302295  792815 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:58.302421  792815 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:12:58.302440  792815 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1208 00:12:58.302444  792815 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1208 00:12:58.302451  792815 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1208 00:12:58.302455  792815 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1208 00:13:16.314475  792815 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1208 00:13:16.314515  792815 cache.go:243] Successfully downloaded all kic artifacts
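
At this point the kicbase image has been loaded into the local Docker daemon from the cached tarball, so the node container created below can start without pulling from gcr.io. A quick way to confirm the image is present on the host (image name taken from the log; output columns will vary):

    docker images gcr.io/k8s-minikube/kicbase-builds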
	I1208 00:13:16.314558  792815 start.go:360] acquireMachinesLock for addons-429840: {Name:mk6b903fc45d259c022d88310f1d219bc2e845f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:13:16.314701  792815 start.go:364] duration metric: took 118.672µs to acquireMachinesLock for "addons-429840"
	I1208 00:13:16.314731  792815 start.go:93] Provisioning new machine with config: &{Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:13:16.314800  792815 start.go:125] createHost starting for "" (driver="docker")
	I1208 00:13:16.318305  792815 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1208 00:13:16.318555  792815 start.go:159] libmachine.API.Create for "addons-429840" (driver="docker")
	I1208 00:13:16.318592  792815 client.go:173] LocalClient.Create starting
	I1208 00:13:16.318703  792815 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 00:13:16.477433  792815 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 00:13:16.795524  792815 cli_runner.go:164] Run: docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 00:13:16.810865  792815 cli_runner.go:211] docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 00:13:16.810967  792815 network_create.go:284] running [docker network inspect addons-429840] to gather additional debugging logs...
	I1208 00:13:16.810988  792815 cli_runner.go:164] Run: docker network inspect addons-429840
	W1208 00:13:16.826292  792815 cli_runner.go:211] docker network inspect addons-429840 returned with exit code 1
	I1208 00:13:16.826333  792815 network_create.go:287] error running [docker network inspect addons-429840]: docker network inspect addons-429840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-429840 not found
	I1208 00:13:16.826348  792815 network_create.go:289] output of [docker network inspect addons-429840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-429840 not found
	
	** /stderr **
	I1208 00:13:16.826455  792815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:13:16.844630  792815 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aa6a50}
	I1208 00:13:16.844678  792815 network_create.go:124] attempt to create docker network addons-429840 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 00:13:16.844742  792815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-429840 addons-429840
	I1208 00:13:16.904103  792815 network_create.go:108] docker network addons-429840 192.168.49.0/24 created
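
The dedicated bridge network created above is what gives the node container its static address 192.168.49.2. A minimal sketch for reading the chosen subnet and gateway back out of Docker, assuming the network name from the log:

    docker network inspect addons-429840 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # expected for this run: 192.168.49.0/24 via 192.168.49.1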
	I1208 00:13:16.904136  792815 kic.go:121] calculated static IP "192.168.49.2" for the "addons-429840" container
	I1208 00:13:16.904226  792815 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 00:13:16.919437  792815 cli_runner.go:164] Run: docker volume create addons-429840 --label name.minikube.sigs.k8s.io=addons-429840 --label created_by.minikube.sigs.k8s.io=true
	I1208 00:13:16.936341  792815 oci.go:103] Successfully created a docker volume addons-429840
	I1208 00:13:16.936432  792815 cli_runner.go:164] Run: docker run --rm --name addons-429840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --entrypoint /usr/bin/test -v addons-429840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 00:13:17.981899  792815 cli_runner.go:217] Completed: docker run --rm --name addons-429840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --entrypoint /usr/bin/test -v addons-429840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.045390598s)
	I1208 00:13:17.981932  792815 oci.go:107] Successfully prepared a docker volume addons-429840
	I1208 00:13:17.981981  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:13:17.982003  792815 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 00:13:17.982092  792815 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-429840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 00:13:21.964017  792815 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-429840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.981886472s)
	I1208 00:13:21.964051  792815 kic.go:203] duration metric: took 3.982045153s to extract preloaded images to volume ...
	W1208 00:13:21.964202  792815 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 00:13:21.964301  792815 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 00:13:22.024145  792815 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-429840 --name addons-429840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-429840 --network addons-429840 --ip 192.168.49.2 --volume addons-429840:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 00:13:22.345963  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Running}}
	I1208 00:13:22.369013  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.395290  792815 cli_runner.go:164] Run: docker exec addons-429840 stat /var/lib/dpkg/alternatives/iptables
	I1208 00:13:22.444981  792815 oci.go:144] the created container "addons-429840" has a running status.
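
The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on 127.0.0.1 with ephemeral host ports; the SSH mapping is what the later "Using SSH client type: native" entries connect to (host port 33493 in this run). A hedged sketch for reading that mapping back, assuming the container name from the log:

    docker port addons-429840 22/tcp
    # e.g. 127.0.0.1:33493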
	I1208 00:13:22.445015  792815 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa...
	I1208 00:13:22.585788  792815 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 00:13:22.607315  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.629341  792815 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 00:13:22.629365  792815 kic_runner.go:114] Args: [docker exec --privileged addons-429840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 00:13:22.702326  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.721610  792815 machine.go:94] provisionDockerMachine start ...
	I1208 00:13:22.721714  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:22.739485  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:22.739811  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:22.739825  792815 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:13:22.740529  792815 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47132->127.0.0.1:33493: read: connection reset by peer
	I1208 00:13:25.890368  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-429840
	
	I1208 00:13:25.890391  792815 ubuntu.go:182] provisioning hostname "addons-429840"
	I1208 00:13:25.890457  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:25.907496  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:25.907835  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:25.907853  792815 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-429840 && echo "addons-429840" | sudo tee /etc/hostname
	I1208 00:13:26.068552  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-429840
	
	I1208 00:13:26.068640  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.086659  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:26.087062  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:26.087083  792815 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-429840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-429840/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-429840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:13:26.238951  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:13:26.238980  792815 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:13:26.239013  792815 ubuntu.go:190] setting up certificates
	I1208 00:13:26.239027  792815 provision.go:84] configureAuth start
	I1208 00:13:26.239100  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:26.255450  792815 provision.go:143] copyHostCerts
	I1208 00:13:26.255533  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:13:26.255667  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:13:26.255727  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:13:26.255778  792815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.addons-429840 san=[127.0.0.1 192.168.49.2 addons-429840 localhost minikube]
	I1208 00:13:26.365519  792815 provision.go:177] copyRemoteCerts
	I1208 00:13:26.365595  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:13:26.365639  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.381644  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:26.486543  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:13:26.504160  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 00:13:26.522593  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:13:26.540098  792815 provision.go:87] duration metric: took 301.04687ms to configureAuth
	I1208 00:13:26.540168  792815 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:13:26.540401  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:26.540518  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.557193  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:26.557500  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:26.557519  792815 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:13:26.866482  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:13:26.866508  792815 machine.go:97] duration metric: took 4.144879175s to provisionDockerMachine
	I1208 00:13:26.866518  792815 client.go:176] duration metric: took 10.547916822s to LocalClient.Create
	I1208 00:13:26.866532  792815 start.go:167] duration metric: took 10.547978755s to libmachine.API.Create "addons-429840"
	I1208 00:13:26.866538  792815 start.go:293] postStartSetup for "addons-429840" (driver="docker")
	I1208 00:13:26.866549  792815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:13:26.866612  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:13:26.866658  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.884226  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:26.990521  792815 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:13:26.993577  792815 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:13:26.993611  792815 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:13:26.993622  792815 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:13:26.993689  792815 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:13:26.993718  792815 start.go:296] duration metric: took 127.173551ms for postStartSetup
	I1208 00:13:26.994028  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:27.013906  792815 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json ...
	I1208 00:13:27.014259  792815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:13:27.014305  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.031667  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.135983  792815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:13:27.140788  792815 start.go:128] duration metric: took 10.82597119s to createHost
	I1208 00:13:27.140811  792815 start.go:83] releasing machines lock for "addons-429840", held for 10.826097772s
	I1208 00:13:27.140886  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:27.157644  792815 ssh_runner.go:195] Run: cat /version.json
	I1208 00:13:27.157698  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.157723  792815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:13:27.157793  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.176307  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.198351  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.278580  792815 ssh_runner.go:195] Run: systemctl --version
	I1208 00:13:27.370912  792815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:13:27.407918  792815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:13:27.412083  792815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:13:27.412158  792815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:13:27.440356  792815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 00:13:27.440380  792815 start.go:496] detecting cgroup driver to use...
	I1208 00:13:27.440413  792815 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:13:27.440462  792815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:13:27.458951  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:13:27.471263  792815 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:13:27.471325  792815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:13:27.488732  792815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:13:27.507459  792815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:13:27.622654  792815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:13:27.740910  792815 docker.go:234] disabling docker service ...
	I1208 00:13:27.741022  792815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:13:27.761528  792815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:13:27.774428  792815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:13:27.893954  792815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:13:28.013704  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:13:28.027014  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:13:28.041586  792815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:13:28.041669  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.050956  792815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:13:28.051066  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.060124  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.069059  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.078041  792815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:13:28.086016  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.095515  792815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.109067  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.117988  792815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:13:28.125756  792815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:13:28.133217  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:28.251789  792815 ssh_runner.go:195] Run: sudo systemctl restart crio
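
The sed edits above set the pause image, switch cri-o to the cgroupfs cgroup manager with a pod-scoped conmon cgroup, and open unprivileged ports via default_sysctls before crio is restarted. A minimal check inside the node that the file ended up with those values, using the same path as the log:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf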
	I1208 00:13:28.421453  792815 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:13:28.421571  792815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:13:28.425303  792815 start.go:564] Will wait 60s for crictl version
	I1208 00:13:28.425391  792815 ssh_runner.go:195] Run: which crictl
	I1208 00:13:28.428767  792815 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:13:28.452538  792815 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:13:28.452682  792815 ssh_runner.go:195] Run: crio --version
	I1208 00:13:28.480776  792815 ssh_runner.go:195] Run: crio --version
	I1208 00:13:28.510887  792815 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 00:13:28.513704  792815 cli_runner.go:164] Run: docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:13:28.528136  792815 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:13:28.531986  792815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:13:28.541463  792815 kubeadm.go:884] updating cluster {Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:13:28.541585  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:13:28.541649  792815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:13:28.574230  792815 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:13:28.574265  792815 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:13:28.574322  792815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:13:28.598366  792815 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:13:28.598388  792815 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:13:28.598395  792815 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1208 00:13:28.598481  792815 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-429840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:13:28.598569  792815 ssh_runner.go:195] Run: crio config
	I1208 00:13:28.671663  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:13:28.671689  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:13:28.671716  792815 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:13:28.671741  792815 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-429840 NodeName:addons-429840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:13:28.671870  792815 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-429840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:13:28.671949  792815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 00:13:28.679651  792815 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:13:28.679739  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:13:28.687320  792815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1208 00:13:28.700053  792815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 00:13:28.712968  792815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1208 00:13:28.725694  792815 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:13:28.729182  792815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:13:28.738820  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:28.857054  792815 ssh_runner.go:195] Run: sudo systemctl start kubelet
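
The kubeadm config rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new and kubelet has been started; kubeadm itself runs later. A hedged sketch for sanity-checking that file on its own, assuming the kubeadm binary path from the kubelet unit and that the kubeadm config validate subcommand is available in v1.34.2:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new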
	I1208 00:13:28.872825  792815 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840 for IP: 192.168.49.2
	I1208 00:13:28.872892  792815 certs.go:195] generating shared ca certs ...
	I1208 00:13:28.872923  792815 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:28.873085  792815 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:13:29.119830  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt ...
	I1208 00:13:29.119865  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt: {Name:mk1cf232fd20a2ae24bd50dbd542c389d0d66187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.120074  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key ...
	I1208 00:13:29.120088  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key: {Name:mk9c510fcf2ada02d3cca2ea71edca904ff4699f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.120175  792815 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:13:29.309653  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt ...
	I1208 00:13:29.309687  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt: {Name:mkdffa916881131a76043035720c06d3bb1d8b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.309872  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key ...
	I1208 00:13:29.309886  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key: {Name:mk7e31d42fb266508928bf35f3347873ccd52074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.309989  792815 certs.go:257] generating profile certs ...
	I1208 00:13:29.310049  792815 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key
	I1208 00:13:29.310064  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt with IP's: []
	I1208 00:13:29.443820  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt ...
	I1208 00:13:29.443851  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: {Name:mk5ad7c34d54d7c05122259765e9864cc409f97c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.444032  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key ...
	I1208 00:13:29.444045  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key: {Name:mk14ffc1607ea261e62d795566b07b2bf6abae1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.444124  792815 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e
	I1208 00:13:29.444144  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1208 00:13:29.678576  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e ...
	I1208 00:13:29.678608  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e: {Name:mka44f27223477651c3a6f063e74685ca2941c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.678779  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e ...
	I1208 00:13:29.678794  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e: {Name:mka77b1e4f7987fc0c84b9659704fd9b5a8aba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.678897  792815 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt
	I1208 00:13:29.678980  792815 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key
	I1208 00:13:29.679032  792815 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key
	I1208 00:13:29.679052  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt with IP's: []
	I1208 00:13:30.038062  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt ...
	I1208 00:13:30.038103  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt: {Name:mkf969471ee6ea587184950d7175a9fb73a26f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:30.038298  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key ...
	I1208 00:13:30.038308  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key: {Name:mkb9bfe0b781f5b511702d33b0a7dadc83334f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:30.038500  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:13:30.038541  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:13:30.038568  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:13:30.038602  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:13:30.039258  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:13:30.063178  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:13:30.085264  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:13:30.105681  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:13:30.125422  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 00:13:30.145609  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:13:30.164632  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:13:30.184323  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:13:30.203443  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:13:30.222860  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:13:30.236256  792815 ssh_runner.go:195] Run: openssl version
	I1208 00:13:30.242448  792815 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.250475  792815 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:13:30.258276  792815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.262055  792815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.262133  792815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.308127  792815 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:13:30.315644  792815 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 00:13:30.323018  792815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:13:30.326489  792815 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 00:13:30.326569  792815 kubeadm.go:401] StartCluster: {Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:13:30.326665  792815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:13:30.326733  792815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:13:30.353613  792815 cri.go:89] found id: ""
	I1208 00:13:30.353689  792815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:13:30.361543  792815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:13:30.369264  792815 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:13:30.369370  792815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:13:30.376937  792815 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:13:30.376957  792815 kubeadm.go:158] found existing configuration files:
	
	I1208 00:13:30.377007  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 00:13:30.384371  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:13:30.384437  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:13:30.391688  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 00:13:30.399360  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:13:30.399475  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:13:30.407103  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 00:13:30.414728  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:13:30.414817  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:13:30.422112  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 00:13:30.429747  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:13:30.429861  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:13:30.437525  792815 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:13:30.480107  792815 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 00:13:30.480564  792815 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:13:30.508937  792815 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:13:30.509084  792815 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:13:30.509154  792815 kubeadm.go:319] OS: Linux
	I1208 00:13:30.509239  792815 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:13:30.509316  792815 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:13:30.509401  792815 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:13:30.509507  792815 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:13:30.509598  792815 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:13:30.509678  792815 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:13:30.509768  792815 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:13:30.509824  792815 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:13:30.509874  792815 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:13:30.589363  792815 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:13:30.589480  792815 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:13:30.589575  792815 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:13:30.598132  792815 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:13:30.604836  792815 out.go:252]   - Generating certificates and keys ...
	I1208 00:13:30.604934  792815 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:13:30.605005  792815 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:13:31.303268  792815 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 00:13:31.502142  792815 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 00:13:31.930905  792815 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 00:13:32.417587  792815 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 00:13:32.877387  792815 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 00:13:32.877526  792815 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-429840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:13:33.281408  792815 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 00:13:33.281689  792815 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-429840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:13:34.461418  792815 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 00:13:34.839197  792815 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 00:13:35.372257  792815 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 00:13:35.372504  792815 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:13:35.748957  792815 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:13:36.201004  792815 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:13:36.344547  792815 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:13:37.518302  792815 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:13:37.643089  792815 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:13:37.643856  792815 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:13:37.646612  792815 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:13:37.650134  792815 out.go:252]   - Booting up control plane ...
	I1208 00:13:37.650258  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:13:37.650347  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:13:37.650424  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:13:37.668167  792815 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:13:37.668317  792815 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:13:37.676003  792815 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:13:37.676437  792815 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:13:37.676774  792815 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:13:37.811880  792815 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:13:37.812000  792815 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:13:38.313152  792815 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.428113ms
	I1208 00:13:38.316517  792815 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 00:13:38.316609  792815 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1208 00:13:38.316912  792815 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 00:13:38.317003  792815 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 00:13:40.729530  792815 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.412604137s
	I1208 00:13:42.955666  792815 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.639026829s
	I1208 00:13:44.819027  792815 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502324534s
	I1208 00:13:44.852574  792815 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 00:13:44.866728  792815 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 00:13:44.879265  792815 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 00:13:44.879500  792815 kubeadm.go:319] [mark-control-plane] Marking the node addons-429840 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 00:13:44.898901  792815 kubeadm.go:319] [bootstrap-token] Using token: s77b7b.z832n76eowpm6ufx
	I1208 00:13:44.901864  792815 out.go:252]   - Configuring RBAC rules ...
	I1208 00:13:44.902054  792815 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 00:13:44.909304  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 00:13:44.919708  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 00:13:44.928335  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 00:13:44.935951  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 00:13:44.940093  792815 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 00:13:45.238194  792815 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 00:13:45.661199  792815 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 00:13:46.226175  792815 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 00:13:46.227428  792815 kubeadm.go:319] 
	I1208 00:13:46.227501  792815 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 00:13:46.227514  792815 kubeadm.go:319] 
	I1208 00:13:46.227592  792815 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 00:13:46.227600  792815 kubeadm.go:319] 
	I1208 00:13:46.227624  792815 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 00:13:46.227686  792815 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 00:13:46.227740  792815 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 00:13:46.227748  792815 kubeadm.go:319] 
	I1208 00:13:46.227804  792815 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 00:13:46.227812  792815 kubeadm.go:319] 
	I1208 00:13:46.227859  792815 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 00:13:46.227865  792815 kubeadm.go:319] 
	I1208 00:13:46.227916  792815 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 00:13:46.227994  792815 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 00:13:46.228065  792815 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 00:13:46.228073  792815 kubeadm.go:319] 
	I1208 00:13:46.228157  792815 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 00:13:46.228236  792815 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 00:13:46.228244  792815 kubeadm.go:319] 
	I1208 00:13:46.228345  792815 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s77b7b.z832n76eowpm6ufx \
	I1208 00:13:46.228463  792815 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 00:13:46.228487  792815 kubeadm.go:319] 	--control-plane 
	I1208 00:13:46.228498  792815 kubeadm.go:319] 
	I1208 00:13:46.228582  792815 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 00:13:46.228591  792815 kubeadm.go:319] 
	I1208 00:13:46.228672  792815 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s77b7b.z832n76eowpm6ufx \
	I1208 00:13:46.228778  792815 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 00:13:46.232805  792815 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 00:13:46.233040  792815 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:13:46.233146  792815 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:13:46.233163  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:13:46.233178  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:13:46.236388  792815 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 00:13:46.239158  792815 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 00:13:46.243229  792815 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 00:13:46.243251  792815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 00:13:46.256138  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 00:13:46.568869  792815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 00:13:46.569063  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:46.569168  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-429840 minikube.k8s.io/updated_at=2025_12_08T00_13_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-429840 minikube.k8s.io/primary=true
	I1208 00:13:46.825361  792815 ops.go:34] apiserver oom_adj: -16
	I1208 00:13:46.825489  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:47.326155  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:47.826320  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:48.326263  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:48.826322  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:49.325598  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:49.826182  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:50.325555  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:50.408203  792815 kubeadm.go:1114] duration metric: took 3.839198524s to wait for elevateKubeSystemPrivileges
	I1208 00:13:50.408236  792815 kubeadm.go:403] duration metric: took 20.081670133s to StartCluster
	I1208 00:13:50.408256  792815 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:50.408377  792815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:13:50.408762  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:50.408968  792815 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:13:50.409102  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 00:13:50.409347  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:50.409386  792815 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1208 00:13:50.409461  792815 addons.go:70] Setting yakd=true in profile "addons-429840"
	I1208 00:13:50.409478  792815 addons.go:239] Setting addon yakd=true in "addons-429840"
	I1208 00:13:50.409501  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.409950  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.410166  792815 addons.go:70] Setting inspektor-gadget=true in profile "addons-429840"
	I1208 00:13:50.410189  792815 addons.go:239] Setting addon inspektor-gadget=true in "addons-429840"
	I1208 00:13:50.410211  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.410621  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.410937  792815 addons.go:70] Setting metrics-server=true in profile "addons-429840"
	I1208 00:13:50.410960  792815 addons.go:239] Setting addon metrics-server=true in "addons-429840"
	I1208 00:13:50.410999  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.411466  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.414911  792815 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-429840"
	I1208 00:13:50.414947  792815 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-429840"
	I1208 00:13:50.414981  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.415532  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.416483  792815 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-429840"
	I1208 00:13:50.416573  792815 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-429840"
	I1208 00:13:50.416636  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.417232  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429695  792815 addons.go:70] Setting cloud-spanner=true in profile "addons-429840"
	I1208 00:13:50.429718  792815 addons.go:70] Setting storage-provisioner=true in profile "addons-429840"
	I1208 00:13:50.429739  792815 addons.go:239] Setting addon storage-provisioner=true in "addons-429840"
	I1208 00:13:50.429740  792815 addons.go:239] Setting addon cloud-spanner=true in "addons-429840"
	I1208 00:13:50.429773  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.429780  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.430276  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.430354  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429700  792815 addons.go:70] Setting registry=true in profile "addons-429840"
	I1208 00:13:50.434581  792815 addons.go:239] Setting addon registry=true in "addons-429840"
	I1208 00:13:50.434640  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.436009  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.436256  792815 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-429840"
	I1208 00:13:50.436282  792815 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-429840"
	I1208 00:13:50.436563  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429711  792815 addons.go:70] Setting registry-creds=true in profile "addons-429840"
	I1208 00:13:50.449272  792815 addons.go:239] Setting addon registry-creds=true in "addons-429840"
	I1208 00:13:50.449323  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.449818  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.454972  792815 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-429840"
	I1208 00:13:50.455049  792815 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-429840"
	I1208 00:13:50.455080  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.455560  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.462955  792815 addons.go:70] Setting volcano=true in profile "addons-429840"
	I1208 00:13:50.463009  792815 addons.go:239] Setting addon volcano=true in "addons-429840"
	I1208 00:13:50.463050  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.463602  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.485170  792815 addons.go:70] Setting volumesnapshots=true in profile "addons-429840"
	I1208 00:13:50.485368  792815 addons.go:239] Setting addon volumesnapshots=true in "addons-429840"
	I1208 00:13:50.485512  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.486069  792815 addons.go:70] Setting default-storageclass=true in profile "addons-429840"
	I1208 00:13:50.486247  792815 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-429840"
	I1208 00:13:50.487732  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.488117  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.496771  792815 out.go:179] * Verifying Kubernetes components...
	I1208 00:13:50.507009  792815 addons.go:70] Setting gcp-auth=true in profile "addons-429840"
	I1208 00:13:50.508822  792815 mustload.go:66] Loading cluster: addons-429840
	I1208 00:13:50.509155  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:50.517792  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.524857  792815 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1208 00:13:50.530400  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1208 00:13:50.530434  792815 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1208 00:13:50.530507  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.543338  792815 addons.go:70] Setting ingress=true in profile "addons-429840"
	I1208 00:13:50.543418  792815 addons.go:239] Setting addon ingress=true in "addons-429840"
	I1208 00:13:50.543495  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.544031  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.560747  792815 addons.go:70] Setting ingress-dns=true in profile "addons-429840"
	I1208 00:13:50.560800  792815 addons.go:239] Setting addon ingress-dns=true in "addons-429840"
	I1208 00:13:50.560848  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.561365  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.571509  792815 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:13:50.575688  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:50.596508  792815 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1208 00:13:50.597218  792815 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:13:50.597238  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:13:50.597315  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.598321  792815 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1208 00:13:50.628877  792815 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1208 00:13:50.628901  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 00:13:50.628964  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.644793  792815 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1208 00:13:50.598696  792815 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1208 00:13:50.645310  792815 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1208 00:13:50.645325  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1208 00:13:50.599058  792815 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1208 00:13:50.647160  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.647176  792815 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-429840"
	I1208 00:13:50.647219  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.647651  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.657263  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 00:13:50.657283  792815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 00:13:50.657447  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.674910  792815 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 00:13:50.674937  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1208 00:13:50.675002  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.682622  792815 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1208 00:13:50.685752  792815 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 00:13:50.685783  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 00:13:50.685861  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.709876  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 00:13:50.713252  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 00:13:50.719009  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 00:13:50.721951  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 00:13:50.723755  792815 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1208 00:13:50.746816  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 00:13:50.754774  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 00:13:50.758511  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 00:13:50.761041  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1208 00:13:50.761892  792815 addons.go:239] Setting addon default-storageclass=true in "addons-429840"
	I1208 00:13:50.761960  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.762462  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.783172  792815 out.go:179]   - Using image docker.io/registry:3.0.0
	I1208 00:13:50.791686  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 00:13:50.791715  792815 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 00:13:50.791829  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.806807  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.808523  792815 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 00:13:50.808551  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1208 00:13:50.808616  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.815359  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.822942  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 00:13:50.823133  792815 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1208 00:13:50.823205  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1208 00:13:50.823241  792815 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1208 00:13:50.830702  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 00:13:50.830731  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 00:13:50.830818  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.831219  792815 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 00:13:50.831233  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1208 00:13:50.831275  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.859986  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:50.864886  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:50.867842  792815 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 00:13:50.867868  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1208 00:13:50.867947  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.882582  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 00:13:50.894385  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.919900  792815 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 00:13:50.919995  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1208 00:13:50.920113  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.927755  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.929027  792815 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 00:13:50.932920  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.939264  792815 out.go:179]   - Using image docker.io/busybox:stable
	I1208 00:13:50.943277  792815 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 00:13:50.943351  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 00:13:50.943459  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.962250  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.963219  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.963729  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.019358  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.041906  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.043968  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.051006  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.055412  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	W1208 00:13:51.065627  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.065735  792815 retry.go:31] will retry after 291.17741ms: ssh: handshake failed: EOF
	I1208 00:13:51.076396  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	W1208 00:13:51.086769  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.086911  792815 retry.go:31] will retry after 171.704284ms: ssh: handshake failed: EOF
	I1208 00:13:51.090809  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.091598  792815 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:13:51.091616  792815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:13:51.091674  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:51.128850  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.167340  792815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1208 00:13:51.259756  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.259832  792815 retry.go:31] will retry after 516.365027ms: ssh: handshake failed: EOF
	I1208 00:13:51.405649  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1208 00:13:51.405676  792815 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1208 00:13:51.560946  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1208 00:13:51.560975  792815 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1208 00:13:51.735360  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1208 00:13:51.735388  792815 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1208 00:13:51.756323  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1208 00:13:51.773996  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:13:51.776622  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 00:13:51.796567  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 00:13:51.805498  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 00:13:51.805530  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 00:13:51.835852  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 00:13:51.873907  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 00:13:51.913924  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 00:13:51.913953  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 00:13:51.925720  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:13:51.938079  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1208 00:13:51.938120  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1208 00:13:51.956677  792815 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 00:13:51.956704  792815 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 00:13:52.045885  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 00:13:52.060439  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 00:13:52.060466  792815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 00:13:52.108497  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 00:13:52.108524  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 00:13:52.132649  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 00:13:52.138420  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1208 00:13:52.265384  792815 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 00:13:52.265409  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 00:13:52.268902  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 00:13:52.268927  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 00:13:52.334484  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 00:13:52.334525  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 00:13:52.335419  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 00:13:52.335441  792815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 00:13:52.455860  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1208 00:13:52.503529  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 00:13:52.503569  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 00:13:52.525637  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 00:13:52.533616  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 00:13:52.625152  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 00:13:52.625183  792815 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 00:13:52.796254  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 00:13:52.796293  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 00:13:52.798571  792815 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 00:13:52.798593  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 00:13:52.976201  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 00:13:52.976229  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 00:13:52.996675  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 00:13:53.241612  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 00:13:53.241638  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 00:13:53.439710  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 00:13:53.439735  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 00:13:53.602661  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 00:13:53.602688  792815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 00:13:53.679751  792815 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.51237932s)
	I1208 00:13:53.679830  792815 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.797222465s)
	I1208 00:13:53.679941  792815 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
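The completed command above pipes the coredns ConfigMap through sed so that a hosts block mapping host.minikube.internal to 192.168.49.1 lands just ahead of the forward directive. A minimal Go sketch of the same string edit, using a trimmed-down Corefile literal purely for illustration:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block immediately before the
    // "forward . /etc/resolv.conf" line of a Corefile, mirroring what the sed
    // expression in the log does to the coredns ConfigMap.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        // Trimmed-down Corefile used purely for illustration.
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }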
	I1208 00:13:53.681093  792815 node_ready.go:35] waiting up to 6m0s for node "addons-429840" to be "Ready" ...
	I1208 00:13:53.850787  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 00:13:53.850807  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 00:13:54.169148  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 00:13:54.169213  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 00:13:54.259865  792815 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-429840" context rescaled to 1 replicas
	I1208 00:13:54.383782  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 00:13:54.383810  792815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 00:13:54.637912  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1208 00:13:55.712083  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:13:56.091894  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.317860878s)
	I1208 00:13:56.092168  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.335817348s)
	I1208 00:13:56.753784  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.917898923s)
	I1208 00:13:56.753842  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.879913806s)
	I1208 00:13:56.753896  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.828155121s)
	I1208 00:13:56.753923  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.708010355s)
	I1208 00:13:56.753980  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.6213067s)
	I1208 00:13:56.754200  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.615733022s)
	I1208 00:13:56.754375  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.957132127s)
	I1208 00:13:56.754461  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.298567982s)
	I1208 00:13:56.754475  792815 addons.go:495] Verifying addon registry=true in "addons-429840"
	I1208 00:13:56.754531  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.228859525s)
	I1208 00:13:56.754553  792815 addons.go:495] Verifying addon metrics-server=true in "addons-429840"
	I1208 00:13:56.754592  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.220935963s)
	I1208 00:13:56.754651  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.978004648s)
	I1208 00:13:56.754658  792815 addons.go:495] Verifying addon ingress=true in "addons-429840"
	I1208 00:13:56.757509  792815 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-429840 service yakd-dashboard -n yakd-dashboard
	
	I1208 00:13:56.759572  792815 out.go:179] * Verifying ingress addon...
	I1208 00:13:56.759609  792815 out.go:179] * Verifying registry addon...
	I1208 00:13:56.764180  792815 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 00:13:56.764180  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 00:13:56.810652  792815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 00:13:56.810682  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:56.811618  792815 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 00:13:56.811645  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:56.834797  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.838077546s)
	W1208 00:13:56.834834  792815 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 00:13:56.834869  792815 retry.go:31] will retry after 357.294897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
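The failure above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same batch as the CRD that defines its kind, and that CRD is not yet established when the class is validated, hence "ensure CRDs are installed first"; the log shows minikube recovering by re-applying with --force. A sketch of one way to avoid the race, applying the CRD on its own and blocking until it is established before creating the class (file paths are taken from the log; the sequencing itself is an assumption, not what minikube does):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a kubectl command and echoes its combined output.
    func run(args ...string) error {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        steps := [][]string{
            // 1. create the CRD on its own
            {"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            // 2. wait for the API server to report the CRD as established
            {"wait", "--for=condition=established", "--timeout=60s",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
            // 3. only then create objects of the new kind
            {"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
        }
        for _, s := range steps {
            if err := run(s...); err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
    }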
	I1208 00:13:57.192389  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 00:13:57.272925  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:57.273154  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:57.317952  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.679990536s)
	I1208 00:13:57.317982  792815 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-429840"
	I1208 00:13:57.321167  792815 out.go:179] * Verifying csi-hostpath-driver addon...
	I1208 00:13:57.324676  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 00:13:57.373572  792815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 00:13:57.373641  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
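From here on the log is dominated by repeated "waiting for pod" lines: each addon is verified by polling pods matching a label selector until they leave Pending. A minimal client-go sketch of that style of poll; the kubeconfig path, namespace and selector are illustrative assumptions, not kapi.go itself:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls pods matching selector in ns until all of them are
    // Running or the timeout expires.
    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    ready = false
                }
            }
            if ready {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for pods with label %q", selector)
    }

    func main() {
        // Kubeconfig path and selector are illustrative assumptions.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }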
	I1208 00:13:57.768230  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:57.768561  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:57.868682  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:13:58.184830  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:13:58.268158  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:58.268303  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:58.328187  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:58.429320  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 00:13:58.429467  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:58.446707  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:58.576166  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 00:13:58.589648  792815 addons.go:239] Setting addon gcp-auth=true in "addons-429840"
	I1208 00:13:58.589697  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:58.590189  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:58.609064  792815 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 00:13:58.609118  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:58.625827  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:58.768532  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:58.768815  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:58.828548  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.268183  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:59.268446  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:59.328354  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.769159  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:59.769880  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:59.827640  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.939956  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.747479152s)
	I1208 00:13:59.940066  792815 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.330968377s)
	I1208 00:13:59.943373  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:59.946198  792815 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1208 00:13:59.949146  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 00:13:59.949175  792815 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 00:13:59.962836  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 00:13:59.962973  792815 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 00:13:59.975970  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 00:13:59.975993  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1208 00:13:59.989283  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1208 00:14:00.203853  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:00.275411  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:00.275668  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:00.335678  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:00.701788  792815 addons.go:495] Verifying addon gcp-auth=true in "addons-429840"
	I1208 00:14:00.704935  792815 out.go:179] * Verifying gcp-auth addon...
	I1208 00:14:00.708596  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 00:14:00.715345  792815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 00:14:00.715370  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:00.768259  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:00.768328  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:00.828460  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:01.212161  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:01.267650  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:01.268031  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:01.328202  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:01.717422  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:01.767884  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:01.769243  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:01.827881  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:02.212256  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:02.267653  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:02.268028  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:02.328266  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:02.683946  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:02.711787  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:02.768107  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:02.768788  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:02.827998  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:03.212301  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:03.267377  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:03.267774  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:03.328043  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:03.712256  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:03.767503  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:03.768096  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:03.828158  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:04.212347  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:04.268134  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:04.268336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:04.328248  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:04.712494  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:04.767684  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:04.767760  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:04.827872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:05.184450  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:05.212495  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:05.267679  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:05.268076  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:05.328378  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:05.712424  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:05.767834  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:05.767982  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:05.827842  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:06.212137  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:06.268247  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:06.268384  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:06.327945  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:06.712243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:06.768175  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:06.768234  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:06.828329  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:07.211872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:07.267855  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:07.267975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:07.328874  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:07.683777  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:07.711734  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:07.768112  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:07.768397  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:07.828308  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:08.212450  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:08.267683  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:08.267838  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:08.327680  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:08.712350  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:08.767321  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:08.767622  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:08.827550  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:09.211288  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:09.269747  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:09.270244  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:09.328078  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:09.684152  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:09.712213  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:09.767570  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:09.767637  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:09.828325  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:10.211989  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:10.268134  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:10.268704  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:10.327832  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:10.711973  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:10.767962  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:10.768194  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:10.828179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:11.212094  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:11.267874  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:11.268056  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:11.327553  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:11.684494  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:11.712875  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:11.767649  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:11.768298  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:11.828232  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:12.212323  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:12.267765  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:12.267951  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:12.328618  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:12.711796  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:12.767816  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:12.768128  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:12.827863  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:13.212171  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:13.267258  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:13.267544  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:13.328457  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:13.712938  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:13.767978  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:13.768243  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:13.827999  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:14.183813  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:14.211902  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:14.267861  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:14.268240  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:14.327735  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:14.711492  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:14.767842  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:14.767975  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:14.827629  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:15.211771  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:15.267668  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:15.268146  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:15.328044  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:15.712646  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:15.767596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:15.767728  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:15.827540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:16.184470  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:16.212575  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:16.267637  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:16.267699  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:16.327522  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:16.712141  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:16.768128  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:16.768296  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:16.827972  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:17.211661  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:17.267785  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:17.267887  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:17.327705  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:17.712434  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:17.767246  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:17.767478  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:17.828265  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:18.213074  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:18.268345  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:18.268461  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:18.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:18.684423  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:18.712534  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:18.767506  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:18.767721  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:18.828311  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:19.211427  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:19.267373  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:19.267522  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:19.328359  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:19.712707  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:19.767565  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:19.767889  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:19.827599  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:20.212076  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:20.268533  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:20.269050  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:20.327975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:20.712418  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:20.767947  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:20.768082  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:20.827646  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:21.184350  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:21.212319  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:21.267450  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:21.267696  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:21.328851  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:21.711936  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:21.767891  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:21.768244  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:21.827795  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:22.211750  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:22.268206  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:22.268441  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:22.328188  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:22.711850  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:22.768109  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:22.768214  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:22.828474  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:23.184585  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:23.211324  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:23.268697  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:23.269253  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:23.327970  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:23.711393  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:23.767387  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:23.767489  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:23.828127  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:24.211745  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:24.267840  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:24.267916  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:24.327931  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:24.711861  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:24.768188  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:24.768317  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:24.827454  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:25.186461  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:25.216494  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:25.267686  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:25.267699  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:25.327482  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:25.712461  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:25.767359  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:25.767516  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:25.828439  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:26.212169  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:26.267401  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:26.267846  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:26.327775  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:26.711895  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:26.768946  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:26.769703  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:26.827522  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:27.211605  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:27.267684  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:27.267932  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:27.328001  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:27.683747  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:27.711694  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:27.768129  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:27.768289  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:27.827761  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:28.211980  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:28.268130  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:28.268590  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:28.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:28.711526  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:28.767780  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:28.767813  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:28.828344  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:29.212111  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:29.268193  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:29.268817  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:29.327655  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:29.684730  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:29.711239  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:29.767083  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:29.767160  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:29.827526  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:30.212495  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:30.267697  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:30.267865  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:30.327780  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:30.712019  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:30.768635  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:30.768767  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:30.828632  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:31.211470  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:31.267652  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:31.268027  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:31.327542  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:31.684798  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:31.711596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:31.767540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:31.767688  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:31.828322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:32.212357  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:32.267698  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:32.267769  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:32.327574  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:32.685704  792815 node_ready.go:49] node "addons-429840" is "Ready"
	I1208 00:14:32.685740  792815 node_ready.go:38] duration metric: took 39.004623693s for node "addons-429840" to be "Ready" ...
	I1208 00:14:32.685756  792815 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:14:32.685818  792815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:14:32.704587  792815 api_server.go:72] duration metric: took 42.295583668s to wait for apiserver process to appear ...
	I1208 00:14:32.704615  792815 api_server.go:88] waiting for apiserver healthz status ...
	I1208 00:14:32.704633  792815 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1208 00:14:32.712966  792815 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1208 00:14:32.714584  792815 api_server.go:141] control plane version: v1.34.2
	I1208 00:14:32.714617  792815 api_server.go:131] duration metric: took 9.995632ms to wait for apiserver health ...
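The healthz wait above amounts to polling the apiserver's /healthz endpoint until it answers HTTP 200 with body "ok", as the log shows for https://192.168.49.2:8443/healthz. The following is a minimal, hypothetical Go sketch of such a probe, not minikube's actual api_server.go code; the URL, timeout, and the decision to skip TLS verification (the test cluster's apiserver uses a self-signed certificate) are illustrative assumptions.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz polls url until it returns 200/"ok" or the deadline passes.
    // Illustrative sketch only; not minikube's implementation.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Self-signed apiserver cert in this environment, so skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // brief pause before the next probe
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := probeHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }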
	I1208 00:14:32.714627  792815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 00:14:32.720706  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:32.728084  792815 system_pods.go:59] 19 kube-system pods found
	I1208 00:14:32.728123  792815 system_pods.go:61] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending
	I1208 00:14:32.728131  792815 system_pods.go:61] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:32.728135  792815 system_pods.go:61] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:32.728139  792815 system_pods.go:61] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:32.728142  792815 system_pods.go:61] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:32.728146  792815 system_pods.go:61] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:32.728150  792815 system_pods.go:61] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:32.728154  792815 system_pods.go:61] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:32.728158  792815 system_pods.go:61] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:32.728163  792815 system_pods.go:61] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:32.728171  792815 system_pods.go:61] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:32.728178  792815 system_pods.go:61] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:32.728196  792815 system_pods.go:61] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:32.728203  792815 system_pods.go:61] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending
	I1208 00:14:32.728210  792815 system_pods.go:61] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:32.728220  792815 system_pods.go:61] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:32.728224  792815 system_pods.go:61] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending
	I1208 00:14:32.728229  792815 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending
	I1208 00:14:32.728233  792815 system_pods.go:61] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:32.728239  792815 system_pods.go:74] duration metric: took 13.605827ms to wait for pod list to return data ...
	I1208 00:14:32.728256  792815 default_sa.go:34] waiting for default service account to be created ...
	I1208 00:14:32.733614  792815 default_sa.go:45] found service account: "default"
	I1208 00:14:32.733647  792815 default_sa.go:55] duration metric: took 5.378378ms for default service account to be created ...
	I1208 00:14:32.733657  792815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 00:14:32.740546  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:32.740590  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending
	I1208 00:14:32.740597  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:32.740602  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:32.740609  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:32.740614  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:32.740620  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:32.740624  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:32.740629  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:32.740634  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:32.740638  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:32.740644  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:32.740653  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:32.740670  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:32.740685  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:32.740694  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:32.740702  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:32.740706  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending
	I1208 00:14:32.740711  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending
	I1208 00:14:32.740715  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:32.740729  792815 retry.go:31] will retry after 271.192312ms: missing components: kube-dns
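The "will retry after ..." lines reflect a poll-and-backoff pattern: list the kube-system pods, report which required components (here kube-dns) are still missing, then sleep a short randomized interval before checking again. Below is a hedged sketch of that pattern; the check function, delay values, and jitter are placeholders for illustration, not minikube's retry.go.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForComponents repeatedly calls check until it reports no missing
    // components or maxWait elapses. Illustrative only; the delays are made up.
    func waitForComponents(check func() []string, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	for {
    		missing := check()
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; still missing: %v", missing)
    		}
    		// Short, jittered pause, in the spirit of the log's retry messages.
    		delay := 250*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
    		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
    		time.Sleep(delay)
    	}
    }

    func main() {
    	attempts := 0
    	err := waitForComponents(func() []string {
    		attempts++
    		if attempts < 4 {
    			return []string{"kube-dns"} // pretend CoreDNS is not Running yet
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("done:", err)
    }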
	I1208 00:14:32.848377  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:32.883302  792815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 00:14:32.883322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:32.883593  792815 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 00:14:32.883608  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:33.017280  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.017329  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.017337  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:33.017344  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:33.017348  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:33.017352  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.017356  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.017362  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.017366  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.017379  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:33.017389  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.017393  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.017399  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.017409  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:33.017415  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.017421  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.017429  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:33.017437  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.017451  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.017459  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:33.017476  792815 retry.go:31] will retry after 291.352747ms: missing components: kube-dns
	I1208 00:14:33.236343  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:33.319349  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.319383  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.319392  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:33.319408  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:33.319415  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:33.319423  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.319429  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.319439  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.319443  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.319448  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:33.319459  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.319463  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.319469  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.319488  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:33.319495  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.319502  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.319511  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:33.319517  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.319524  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.319536  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 00:14:33.319551  792815 retry.go:31] will retry after 378.336421ms: missing components: kube-dns
	I1208 00:14:33.321072  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:33.325250  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:33.334764  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:33.702512  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.702593  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.702618  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1208 00:14:33.702640  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:33.702678  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:33.702696  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.702715  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.702734  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.702761  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.702785  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1208 00:14:33.702803  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.702820  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.702873  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.702898  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:33.702921  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.702941  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.702971  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:33.702992  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.703023  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.703042  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 00:14:33.703082  792815 retry.go:31] will retry after 375.454237ms: missing components: kube-dns
	I1208 00:14:33.713358  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:33.768010  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:33.769069  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:33.828151  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:34.088866  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:34.088952  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Running
	I1208 00:14:34.088978  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1208 00:14:34.089017  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:34.089048  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:34.089066  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:34.089087  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:34.089105  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:34.089133  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:34.089160  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1208 00:14:34.089180  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:34.089198  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:34.089218  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:34.089248  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:34.089273  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:34.089299  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:34.089319  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:34.089351  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:34.089375  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:34.089396  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Running
	I1208 00:14:34.089418  792815 system_pods.go:126] duration metric: took 1.355754761s to wait for k8s-apps to be running ...
	I1208 00:14:34.089448  792815 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 00:14:34.089524  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:14:34.103729  792815 system_svc.go:56] duration metric: took 14.282716ms WaitForService to wait for kubelet
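The WaitForService step above boils down to asking systemd whether the kubelet unit is active, via the "sudo systemctl is-active --quiet" command shown in the log (minikube issues it through its ssh_runner). A small, hypothetical local equivalent in Go; the unit name and use of sudo are assumptions for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive reports whether the kubelet systemd unit is active.
    // With --quiet, systemctl prints nothing and signals the state via exit code.
    func kubeletActive() (bool, error) {
    	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
    	err := cmd.Run()
    	if err == nil {
    		return true, nil // exit code 0: unit is active
    	}
    	if _, isExit := err.(*exec.ExitError); isExit {
    		return false, nil // non-zero exit: unit is inactive or failed
    	}
    	return false, err // sudo/systemctl itself could not be run
    }

    func main() {
    	active, err := kubeletActive()
    	fmt.Println("kubelet active:", active, "err:", err)
    }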
	I1208 00:14:34.103799  792815 kubeadm.go:587] duration metric: took 43.694799911s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:14:34.103832  792815 node_conditions.go:102] verifying NodePressure condition ...
	I1208 00:14:34.107096  792815 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 00:14:34.107169  792815 node_conditions.go:123] node cpu capacity is 2
	I1208 00:14:34.107199  792815 node_conditions.go:105] duration metric: took 3.348129ms to run NodePressure ...
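The NodePressure readout above (ephemeral storage 203034800Ki, 2 CPUs) comes from the node's status capacity. A hedged client-go sketch that reads the same fields follows; the kubeconfig path and hard-coded node name are assumptions for illustration, not how minikube's node_conditions.go obtains them.

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; minikube writes one per profile.
    	home, _ := os.UserHomeDir()
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "addons-429840", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The same capacity fields the log reports.
    	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
    	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
    }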
	I1208 00:14:34.107223  792815 start.go:242] waiting for startup goroutines ...
	I1208 00:14:34.212139  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:34.268582  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:34.268921  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:34.328953  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:34.712996  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:34.769381  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:34.769802  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:34.827963  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:35.212735  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:35.312894  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:35.313395  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:35.332473  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:35.711872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:35.769118  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:35.769296  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:35.828456  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:36.213282  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:36.270456  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:36.270965  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:36.335696  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:36.713364  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:36.770026  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:36.770350  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:36.829966  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:37.212191  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:37.268197  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:37.269581  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:37.330014  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:37.713179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:37.769982  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:37.770400  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:37.828550  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:38.211463  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:38.267832  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:38.268336  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:38.328736  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:38.711484  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:38.769588  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:38.770523  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:38.829567  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:39.212554  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:39.269782  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:39.270260  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:39.328981  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:39.712698  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:39.769792  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:39.770315  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:39.828855  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:40.212063  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:40.267983  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:40.268019  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:40.329744  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:40.712979  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:40.769954  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:40.770646  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:40.827907  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:41.212501  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:41.269512  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:41.269848  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:41.328606  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:41.712544  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:41.768906  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:41.769059  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:41.827998  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:42.212585  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:42.269283  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:42.269587  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:42.328431  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:42.711607  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:42.768082  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:42.768427  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:42.828843  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:43.212865  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:43.270182  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:43.270591  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:43.329642  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:43.711868  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:43.770067  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:43.770428  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:43.832078  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:44.213127  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:44.269187  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:44.269287  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:44.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:44.712084  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:44.769534  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:44.769749  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:44.827702  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:45.213899  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:45.269794  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:45.270431  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:45.328482  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:45.711907  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:45.768825  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:45.770157  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:45.827852  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:46.212294  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:46.268569  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:46.268809  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:46.329703  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:46.711895  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:46.769297  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:46.770104  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:46.828713  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:47.212459  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:47.268906  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:47.269149  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:47.328819  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:47.712223  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:47.768324  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:47.768443  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:47.829047  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:48.211551  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:48.268643  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:48.268791  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:48.329216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:48.711642  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:48.768552  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:48.768688  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:48.828017  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:49.211946  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:49.268450  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:49.268572  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:49.369173  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:49.712824  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:49.768042  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:49.768218  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:49.828389  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:50.211999  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:50.268645  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:50.269189  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:50.328153  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:50.713049  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:50.820409  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:50.820975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:50.831381  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:51.212684  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:51.268718  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:51.268939  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:51.328242  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:51.712863  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:51.768303  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:51.768954  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:51.828506  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:52.211916  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:52.268656  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:52.269425  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:52.329375  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:52.711797  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:52.772576  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:52.773182  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:52.831277  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:53.213067  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:53.313919  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:53.314282  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:53.328985  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:53.712761  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:53.769194  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:53.769368  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:53.828890  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:54.227760  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:54.328531  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:54.328897  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:54.334119  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:54.715785  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:54.816603  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:54.816838  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:54.829528  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:55.212322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:55.267707  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:55.278301  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:55.331102  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:55.713710  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:55.767880  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:55.768072  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:55.828225  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:56.218919  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:56.323491  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:56.324602  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:56.329773  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:56.712378  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:56.769315  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:56.769606  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:56.828932  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:57.214016  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:57.268462  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:57.269087  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:57.329236  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:57.713062  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:57.769995  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:57.770325  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:57.828443  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:58.212258  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:58.268329  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:58.273077  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:58.328791  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:58.712972  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:58.769736  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:58.770436  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:58.829179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:59.212003  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:59.269893  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:59.270021  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:59.328336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:59.712536  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:59.768793  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:59.768936  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:59.828319  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:00.305980  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:00.306324  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:00.306779  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:00.425621  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:00.713645  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:00.771299  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:00.771485  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:00.831594  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:01.211981  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:01.268867  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:01.269237  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:01.328821  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:01.712379  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:01.767797  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:01.767937  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:01.828216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:02.212278  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:02.269669  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:02.270043  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:02.328757  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:02.712585  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:02.813577  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:02.813984  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:02.913948  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:03.212609  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:03.269449  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:03.269541  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:03.329452  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:03.712023  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:03.769510  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:03.769663  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:03.827822  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:04.212483  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:04.272230  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:04.272591  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:04.329271  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:04.711984  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:04.813331  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:04.813446  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:04.828637  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:05.212910  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:05.280892  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:05.281328  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:05.329672  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:05.711948  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:05.768711  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:05.768898  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:05.828491  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:06.211889  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:06.269269  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:06.269751  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:06.328271  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:06.711503  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:06.769602  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:06.769760  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:06.828113  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:07.212723  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:07.269856  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:07.270074  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:07.328058  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:07.712362  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:07.768941  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:07.769920  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:07.828758  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:08.212216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:08.268317  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:08.268504  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:08.328469  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:08.711519  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:08.776373  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:08.776563  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:08.875811  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:09.212107  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:09.268834  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:09.268975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:09.336383  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:09.712280  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:09.767500  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:09.767831  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:09.827851  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:10.212595  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:10.268348  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:10.269063  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:10.328900  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:10.712417  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:10.768834  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:10.768997  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:10.828744  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:11.211644  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:11.269648  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:11.269788  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:11.328056  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:11.712728  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:11.768304  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:11.768441  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:11.828596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:12.212499  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:12.268483  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:12.269842  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:12.328128  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:12.712076  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:12.769542  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:12.769702  792815 kapi.go:107] duration metric: took 1m16.005525778s to wait for kubernetes.io/minikube-addons=registry ...
	I1208 00:15:12.828289  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:13.212580  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:13.313741  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:13.335219  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:13.712194  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:13.769081  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:13.829421  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:14.211836  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:14.269299  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:14.329745  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:14.712563  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:14.767395  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:14.828347  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:15.212014  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:15.268319  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:15.328107  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:15.712226  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:15.767219  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:15.828316  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:16.211596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:16.267297  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:16.328186  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:16.712027  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:16.768486  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:16.828477  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:17.212015  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:17.269836  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:17.337273  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:17.712490  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:17.768301  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:17.829309  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:18.212318  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:18.268032  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:18.329594  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:18.712531  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:18.767622  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:18.827818  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:19.211786  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:19.268205  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:19.328715  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:19.720180  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:19.815334  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:19.916584  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:20.213137  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:20.313262  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:20.329013  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:20.712195  792815 kapi.go:107] duration metric: took 1m20.003600343s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1208 00:15:20.715451  792815 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-429840 cluster.
	I1208 00:15:20.718449  792815 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 00:15:20.721393  792815 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1208 00:15:20.767437  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:20.828539  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:21.268066  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:21.328288  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:21.768460  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:21.829988  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:22.268268  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:22.328963  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:22.768369  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:22.829540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:23.268134  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:23.328336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:23.767820  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:23.828126  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:24.267190  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:24.328300  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:24.768447  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:24.828721  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:25.268429  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:25.328356  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:25.769443  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:25.829063  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:26.267340  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:26.332265  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:26.767762  792815 kapi.go:107] duration metric: took 1m30.003580281s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 00:15:26.828145  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:27.328100  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:27.837360  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:28.329213  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:28.829900  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:29.329154  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:29.829013  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:30.328395  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:30.828580  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:31.327814  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:31.828925  792815 kapi.go:107] duration metric: took 1m34.504244516s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 00:15:31.832104  792815 out.go:179] * Enabled addons: inspektor-gadget, default-storageclass, ingress-dns, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1208 00:15:31.834929  792815 addons.go:530] duration metric: took 1m41.425531543s for enable addons: enabled=[inspektor-gadget default-storageclass ingress-dns amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1208 00:15:31.834990  792815 start.go:247] waiting for cluster config update ...
	I1208 00:15:31.835017  792815 start.go:256] writing updated cluster config ...
	I1208 00:15:31.835320  792815 ssh_runner.go:195] Run: rm -f paused
	I1208 00:15:31.840010  792815 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 00:15:31.843388  792815 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vjrlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.848866  792815 pod_ready.go:94] pod "coredns-66bc5c9577-vjrlp" is "Ready"
	I1208 00:15:31.848894  792815 pod_ready.go:86] duration metric: took 5.475109ms for pod "coredns-66bc5c9577-vjrlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.851566  792815 pod_ready.go:83] waiting for pod "etcd-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.856549  792815 pod_ready.go:94] pod "etcd-addons-429840" is "Ready"
	I1208 00:15:31.856589  792815 pod_ready.go:86] duration metric: took 4.9964ms for pod "etcd-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.859007  792815 pod_ready.go:83] waiting for pod "kube-apiserver-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.863615  792815 pod_ready.go:94] pod "kube-apiserver-addons-429840" is "Ready"
	I1208 00:15:31.863648  792815 pod_ready.go:86] duration metric: took 4.612315ms for pod "kube-apiserver-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.865919  792815 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.244288  792815 pod_ready.go:94] pod "kube-controller-manager-addons-429840" is "Ready"
	I1208 00:15:32.244316  792815 pod_ready.go:86] duration metric: took 378.366929ms for pod "kube-controller-manager-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.444601  792815 pod_ready.go:83] waiting for pod "kube-proxy-29dtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.844828  792815 pod_ready.go:94] pod "kube-proxy-29dtj" is "Ready"
	I1208 00:15:32.844857  792815 pod_ready.go:86] duration metric: took 400.228555ms for pod "kube-proxy-29dtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.044273  792815 pod_ready.go:83] waiting for pod "kube-scheduler-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.444228  792815 pod_ready.go:94] pod "kube-scheduler-addons-429840" is "Ready"
	I1208 00:15:33.444255  792815 pod_ready.go:86] duration metric: took 399.956904ms for pod "kube-scheduler-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.444269  792815 pod_ready.go:40] duration metric: took 1.604224653s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 00:15:33.507809  792815 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 00:15:33.510815  792815 out.go:179] * Done! kubectl is now configured to use "addons-429840" cluster and "default" namespace by default
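	Note on the gcp-auth messages above: they describe how to opt a pod out of credential injection by giving the pod a label whose key is `gcp-auth-skip-secret`. A minimal sketch of such a pod spec, written with the Kubernetes Go client types (the pod name, namespace, image and label value here are illustrative assumptions; the addon message only names the label key):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod that opts out of gcp-auth credential injection by
		// carrying the gcp-auth-skip-secret label mentioned in the log above.
		// Name, namespace, image and the label value "true" are assumptions.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-creds-demo",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "docker.io/kicbase/echo-server:1.0"},
				},
			},
		}
		fmt.Printf("pod %s/%s labels: %v\n", pod.Namespace, pod.Name, pod.Labels)
	}

	For pods that already exist, the same log notes they can either be recreated or picked up by rerunning the addon enable step with --refresh (for example, something like `minikube addons enable gcp-auth --refresh`; the exact invocation is an assumption based on the message above).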
	
	
	==> CRI-O <==
	Dec 08 00:18:18 addons-429840 crio[831]: time="2025-12-08T00:18:18.229353325Z" level=info msg="Removed container ecc46bf2402d34e133c6c8032c803b659bb432c001f4944fafadf8773f019d4b: kube-system/registry-creds-764b6fb674-2h5gp/registry-creds" id=b0c2ae65-200f-43e6-824e-b29e75bed90d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.667539591Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9fzbz/POD" id=54f585d9-c3f4-4c0c-95ff-b40f21ae538a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.667604231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.678555794Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9fzbz Namespace:default ID:96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71 UID:a5d940f1-ddd4-49a5-883f-1fbffcfbdd22 NetNS:/var/run/netns/ac5dc46f-f186-4ee1-ba63-89823302f096 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d030}] Aliases:map[]}"
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.678789782Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9fzbz to CNI network \"kindnet\" (type=ptp)"
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.694794665Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9fzbz Namespace:default ID:96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71 UID:a5d940f1-ddd4-49a5-883f-1fbffcfbdd22 NetNS:/var/run/netns/ac5dc46f-f186-4ee1-ba63-89823302f096 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d030}] Aliases:map[]}"
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.695111082Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9fzbz for CNI network kindnet (type=ptp)"
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.705559658Z" level=info msg="Ran pod sandbox 96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71 with infra container: default/hello-world-app-5d498dc89-9fzbz/POD" id=54f585d9-c3f4-4c0c-95ff-b40f21ae538a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.706953863Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8ccabe0d-466b-40c5-89cb-ff30482ed8b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.707189771Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=8ccabe0d-466b-40c5-89cb-ff30482ed8b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.707304693Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=8ccabe0d-466b-40c5-89cb-ff30482ed8b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.70828323Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e3f5f1f3-f830-4000-bb03-e5d1d7480944 name=/runtime.v1.ImageService/PullImage
	Dec 08 00:18:38 addons-429840 crio[831]: time="2025-12-08T00:18:38.71136149Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.295307615Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=e3f5f1f3-f830-4000-bb03-e5d1d7480944 name=/runtime.v1.ImageService/PullImage
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.29609653Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=71d65108-e3fa-4630-b439-9be8349bc0d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.299449598Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=661cda86-8122-40b9-98bc-c62dc10404ed name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.307094214Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-9fzbz/hello-world-app" id=a3e0c51a-42eb-4aef-94ff-0ac2a8f119cd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.307233309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.320653533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.321022989Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/de9dafc3742381b85571078450217d6e0a6bc7fda0d5b4e32f2d2f2d146fb760/merged/etc/passwd: no such file or directory"
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.321075133Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/de9dafc3742381b85571078450217d6e0a6bc7fda0d5b4e32f2d2f2d146fb760/merged/etc/group: no such file or directory"
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.321489299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.341356999Z" level=info msg="Created container 64b88dfe84aff2a269c425a29463e37555e41029455be459496c7a3f7f73c028: default/hello-world-app-5d498dc89-9fzbz/hello-world-app" id=a3e0c51a-42eb-4aef-94ff-0ac2a8f119cd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.347586651Z" level=info msg="Starting container: 64b88dfe84aff2a269c425a29463e37555e41029455be459496c7a3f7f73c028" id=d927bd6b-2f16-4973-ac5f-b38563987d94 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 00:18:39 addons-429840 crio[831]: time="2025-12-08T00:18:39.353950015Z" level=info msg="Started container" PID=7154 containerID=64b88dfe84aff2a269c425a29463e37555e41029455be459496c7a3f7f73c028 description=default/hello-world-app-5d498dc89-9fzbz/hello-world-app id=d927bd6b-2f16-4973-ac5f-b38563987d94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	64b88dfe84aff       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   96922e2f78dd3       hello-world-app-5d498dc89-9fzbz            default
	a747f3bc0b95e       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             22 seconds ago           Exited              registry-creds                           4                   407ac9c413609       registry-creds-764b6fb674-2h5gp            kube-system
	407d7c82f4ad9       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   fcba0333b832a       nginx                                      default
	2b8611578b35a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   6a4afe108103e       busybox                                    default
	51022c4a75880       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	16ef54fec815f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	d7a58ce04c20d       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	9a5a7433c6610       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	8fecdf93ed323       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   ec64e0d6b6f8c       ingress-nginx-controller-6c8bf45fb-p78l4   ingress-nginx
	1d51443c465f7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   a7e57ff0d712e       gcp-auth-78565c9fb4-sfdxb                  gcp-auth
	28fe644efa1f3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   57914ac67def8       gadget-c4kp7                               gadget
	22bde17100e41       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	56946171c705e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   0c594abfc8fc3       registry-proxy-9vjr9                       kube-system
	aec26cd72a02e       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   59babd64dc4b6       nvidia-device-plugin-daemonset-g6445       kube-system
	3fd9d7897b3fa       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   4f5736472b410       csi-hostpath-resizer-0                     kube-system
	8eec18b6c152f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   e40fe2b94ac6f       csi-hostpath-attacher-0                    kube-system
	ac1cbf091afea       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   5cf606d3736e0       ingress-nginx-admission-patch-qqch7        ingress-nginx
	88fac620cb5dd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   e7409507b9019       yakd-dashboard-5ff678cb9-2jf6z             yakd-dashboard
	2576d4f9f4d72       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   ec34c9d53f908       ingress-nginx-admission-create-226t7       ingress-nginx
	eb7e7a7efc043       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   ba32c0548dc1e       registry-6b586f9694-p77p6                  kube-system
	0695bc22a1299       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	777823a1b3e68       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   665894036b3d5       snapshot-controller-7d9fbc56b8-rh7x7       kube-system
	220ce0d6bf3d5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   24f800b14b818       metrics-server-85b7d694d7-9z5hq            kube-system
	9e848158b2cbc       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   376c1b482ebd3       local-path-provisioner-648f6765c9-8rr8f    local-path-storage
	2d6f8acedf212       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   0a106f7566d81       cloud-spanner-emulator-5bdddb765-d4kr8     default
	91a9d71fa2558       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   ba8dbaaba1a2e       snapshot-controller-7d9fbc56b8-675j4       kube-system
	25f99dffaa8ed       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   9598a6cbe996e       kube-ingress-dns-minikube                  kube-system
	87d0a5e3d7fbb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   6abf234907a7b       coredns-66bc5c9577-vjrlp                   kube-system
	f877c300e548d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   462f4f11e5991       storage-provisioner                        kube-system
	49a9b28a64519       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   e3d98de48b4a7       kindnet-zcvnv                              kube-system
	1c7ec16efebcb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   bff42258c26fe       kube-proxy-29dtj                           kube-system
	8bf8d2ee6f616       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   1ca586a02bbe4       kube-apiserver-addons-429840               kube-system
	2fba6529a9c34       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   463e82dea41ce       kube-scheduler-addons-429840               kube-system
	01230e11e24c3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   4705986f530ca       etcd-addons-429840                         kube-system
	92f126df047be       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   cc5914885f51d       kube-controller-manager-addons-429840      kube-system
	
	
	==> coredns [87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960] <==
	[INFO] 10.244.0.18:47445 - 40452 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002630107s
	[INFO] 10.244.0.18:47445 - 18648 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000113134s
	[INFO] 10.244.0.18:47445 - 51105 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014497s
	[INFO] 10.244.0.18:48287 - 36809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160043s
	[INFO] 10.244.0.18:48287 - 36356 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071599s
	[INFO] 10.244.0.18:37946 - 25739 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095566s
	[INFO] 10.244.0.18:37946 - 25550 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081593s
	[INFO] 10.244.0.18:49717 - 9708 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077555s
	[INFO] 10.244.0.18:49717 - 9271 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182328s
	[INFO] 10.244.0.18:46203 - 3224 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002220044s
	[INFO] 10.244.0.18:46203 - 3397 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002472476s
	[INFO] 10.244.0.18:60437 - 46779 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140071s
	[INFO] 10.244.0.18:60437 - 46910 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132883s
	[INFO] 10.244.0.20:55567 - 61063 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000577901s
	[INFO] 10.244.0.20:59221 - 39027 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014282s
	[INFO] 10.244.0.20:35704 - 25337 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140555s
	[INFO] 10.244.0.20:45333 - 46018 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009962s
	[INFO] 10.244.0.20:56619 - 27765 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125178s
	[INFO] 10.244.0.20:48365 - 44039 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108276s
	[INFO] 10.244.0.20:43533 - 21698 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002100222s
	[INFO] 10.244.0.20:50229 - 28223 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002196987s
	[INFO] 10.244.0.20:33952 - 61830 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000898282s
	[INFO] 10.244.0.20:56611 - 6914 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002263515s
	[INFO] 10.244.0.24:50487 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000167994s
	[INFO] 10.244.0.24:54317 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105486s
	
	
	==> describe nodes <==
	Name:               addons-429840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-429840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-429840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T00_13_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-429840
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-429840"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 00:13:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-429840
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 00:18:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 00:17:40 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 00:17:40 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 00:17:40 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 00:17:40 +0000   Mon, 08 Dec 2025 00:14:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-429840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                18ca2914-c576-4e62-b7ae-ff5b28fdea60
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     cloud-spanner-emulator-5bdddb765-d4kr8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  default                     hello-world-app-5d498dc89-9fzbz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-c4kp7                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  gcp-auth                    gcp-auth-78565c9fb4-sfdxb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-p78l4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m44s
	  kube-system                 coredns-66bc5c9577-vjrlp                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m49s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 csi-hostpathplugin-q66vl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-addons-429840                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m55s
	  kube-system                 kindnet-zcvnv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m49s
	  kube-system                 kube-apiserver-addons-429840                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-addons-429840       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-29dtj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-addons-429840                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 metrics-server-85b7d694d7-9z5hq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m46s
	  kube-system                 nvidia-device-plugin-daemonset-g6445        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-6b586f9694-p77p6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 registry-creds-764b6fb674-2h5gp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 registry-proxy-9vjr9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 snapshot-controller-7d9fbc56b8-675j4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-rh7x7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  local-path-storage          local-path-provisioner-648f6765c9-8rr8f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2jf6z              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m48s                kube-proxy       
	  Normal   Starting                 5m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-429840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-429840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s (x8 over 5m2s)  kubelet          Node addons-429840 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m55s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m55s                kubelet          Node addons-429840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s                kubelet          Node addons-429840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s                kubelet          Node addons-429840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m51s                node-controller  Node addons-429840 event: Registered Node addons-429840 in Controller
	  Normal   NodeReady                4m8s                 kubelet          Node addons-429840 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 7 23:23] overlayfs: idmapped layers are currently not supported
	[ +23.021914] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:24] overlayfs: idmapped layers are currently not supported
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933] <==
	{"level":"warn","ts":"2025-12-08T00:13:41.583725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.597588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.615867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.652908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.673077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.696673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.713896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.731560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.744252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.766015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.785222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.816267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.828493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.837750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.877786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.899232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.915670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:42.007877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:57.548924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:57.556210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.919995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.934897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.966298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.981269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T00:15:00.233623Z","caller":"traceutil/trace.go:172","msg":"trace[812187976] transaction","detail":"{read_only:false; response_revision:1114; number_of_response:1; }","duration":"160.401499ms","start":"2025-12-08T00:15:00.073198Z","end":"2025-12-08T00:15:00.233599Z","steps":["trace[812187976] 'process raft request'  (duration: 94.432134ms)","trace[812187976] 'compare'  (duration: 65.600032ms)"],"step_count":2}
	
	
	==> gcp-auth [1d51443c465f7b718875deabcaa99ef0a36bb503c3543d483dd8779bcb546f4b] <==
	2025/12/08 00:15:19 GCP Auth Webhook started!
	2025/12/08 00:15:33 Ready to marshal response ...
	2025/12/08 00:15:33 Ready to write response ...
	2025/12/08 00:15:34 Ready to marshal response ...
	2025/12/08 00:15:34 Ready to write response ...
	2025/12/08 00:15:34 Ready to marshal response ...
	2025/12/08 00:15:34 Ready to write response ...
	2025/12/08 00:15:56 Ready to marshal response ...
	2025/12/08 00:15:56 Ready to write response ...
	2025/12/08 00:15:57 Ready to marshal response ...
	2025/12/08 00:15:57 Ready to write response ...
	2025/12/08 00:16:18 Ready to marshal response ...
	2025/12/08 00:16:18 Ready to write response ...
	2025/12/08 00:16:30 Ready to marshal response ...
	2025/12/08 00:16:30 Ready to write response ...
	2025/12/08 00:16:52 Ready to marshal response ...
	2025/12/08 00:16:52 Ready to write response ...
	2025/12/08 00:16:52 Ready to marshal response ...
	2025/12/08 00:16:52 Ready to write response ...
	2025/12/08 00:16:59 Ready to marshal response ...
	2025/12/08 00:16:59 Ready to write response ...
	2025/12/08 00:18:38 Ready to marshal response ...
	2025/12/08 00:18:38 Ready to write response ...
	
	
	==> kernel <==
	 00:18:40 up  5:00,  0 user,  load average: 0.39, 1.10, 1.27
	Linux addons-429840 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1] <==
	I1208 00:16:32.238193       1 main.go:301] handling current node
	I1208 00:16:42.237165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:16:42.237227       1 main.go:301] handling current node
	I1208 00:16:52.237091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:16:52.237941       1 main.go:301] handling current node
	I1208 00:17:02.237420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:02.237455       1 main.go:301] handling current node
	I1208 00:17:12.241240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:12.241277       1 main.go:301] handling current node
	I1208 00:17:22.237126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:22.237232       1 main.go:301] handling current node
	I1208 00:17:32.238154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:32.238191       1 main.go:301] handling current node
	I1208 00:17:42.237289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:42.237333       1 main.go:301] handling current node
	I1208 00:17:52.237163       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:17:52.237200       1 main.go:301] handling current node
	I1208 00:18:02.238133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:18:02.238252       1 main.go:301] handling current node
	I1208 00:18:12.237230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:18:12.237418       1 main.go:301] handling current node
	I1208 00:18:22.237087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:18:22.237120       1 main.go:301] handling current node
	I1208 00:18:32.237192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:18:32.237311       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f] <==
	W1208 00:14:55.781220       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:55.781268       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1208 00:14:55.781282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1208 00:14:55.782269       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:55.782364       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1208 00:14:55.782377       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1208 00:14:56.231467       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:56.231548       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1208 00:14:56.232849       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	E1208 00:14:56.234474       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	E1208 00:14:56.239003       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	I1208 00:14:56.376645       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 00:15:44.513032       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43994: use of closed network connection
	E1208 00:15:44.883022       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44044: use of closed network connection
	I1208 00:16:08.022784       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1208 00:16:09.677932       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1208 00:16:18.174067       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1208 00:16:18.507605       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.21.24"}
	I1208 00:18:38.525636       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.68.237"}
	
	
	==> kube-controller-manager [92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685] <==
	I1208 00:13:49.948430       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1208 00:13:49.948702       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 00:13:49.948727       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 00:13:49.948745       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 00:13:49.948753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 00:13:49.948764       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 00:13:49.958476       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1208 00:13:49.958525       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1208 00:13:49.958544       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1208 00:13:49.958549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1208 00:13:49.958554       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 00:13:49.959784       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 00:13:49.969463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 00:13:49.985094       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-429840" podCIDRs=["10.244.0.0/24"]
	E1208 00:13:54.929154       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1208 00:14:19.912888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1208 00:14:19.913043       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1208 00:14:19.913085       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1208 00:14:19.954897       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1208 00:14:19.959125       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1208 00:14:20.013902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 00:14:20.059958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 00:14:34.937097       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1208 00:14:50.023371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1208 00:14:50.067794       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef] <==
	I1208 00:13:51.698643       1 server_linux.go:53] "Using iptables proxy"
	I1208 00:13:51.786125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 00:13:51.903189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 00:13:51.903227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1208 00:13:51.903313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 00:13:51.971703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 00:13:51.971765       1 server_linux.go:132] "Using iptables Proxier"
	I1208 00:13:51.981462       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 00:13:51.981801       1 server.go:527] "Version info" version="v1.34.2"
	I1208 00:13:51.981825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 00:13:51.988898       1 config.go:200] "Starting service config controller"
	I1208 00:13:51.988930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 00:13:51.988954       1 config.go:106] "Starting endpoint slice config controller"
	I1208 00:13:51.988958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 00:13:51.988970       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 00:13:51.988987       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 00:13:51.995944       1 config.go:309] "Starting node config controller"
	I1208 00:13:51.995969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 00:13:51.995977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 00:13:52.089453       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 00:13:52.089489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 00:13:52.089540       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b] <==
	E1208 00:13:43.001732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 00:13:43.001783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 00:13:43.001836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 00:13:43.001894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 00:13:43.001944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 00:13:43.001991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 00:13:43.002036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 00:13:43.002100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 00:13:43.002157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 00:13:43.002203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 00:13:43.002231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 00:13:43.832991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 00:13:43.841698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 00:13:43.874942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 00:13:43.887957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1208 00:13:43.977802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1208 00:13:44.004296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 00:13:44.055165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 00:13:44.073659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 00:13:44.084500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 00:13:44.105902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 00:13:44.173289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 00:13:44.182981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 00:13:44.200735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1208 00:13:46.732128       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 00:17:45 addons-429840 kubelet[1266]: I1208 00:17:45.698151    1266 scope.go:117] "RemoveContainer" containerID="4da410eabe9651ce8d47a773a5a7c3e4c80d2f5c24a71016d31951d24e4b5a93"
	Dec 08 00:17:45 addons-429840 kubelet[1266]: I1208 00:17:45.712553    1266 scope.go:117] "RemoveContainer" containerID="723b2f8763879b2fa3d80f22b7ca0163da1bcdf2cd014cbb6d01570a36e1335d"
	Dec 08 00:17:45 addons-429840 kubelet[1266]: E1208 00:17:45.749935    1266 manager.go:1116] Failed to create existing container: /docker/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/crio-6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0: Error finding container 6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0: Status 404 returned error can't find the container with id 6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0
	Dec 08 00:17:45 addons-429840 kubelet[1266]: E1208 00:17:45.751942    1266 manager.go:1116] Failed to create existing container: /crio-c95dc1ea1301ce5beb4b9fc1c38cd801aa1f1e69c5a775652caf5d353f840ee5: Error finding container c95dc1ea1301ce5beb4b9fc1c38cd801aa1f1e69c5a775652caf5d353f840ee5: Status 404 returned error can't find the container with id c95dc1ea1301ce5beb4b9fc1c38cd801aa1f1e69c5a775652caf5d353f840ee5
	Dec 08 00:17:45 addons-429840 kubelet[1266]: E1208 00:17:45.753739    1266 manager.go:1116] Failed to create existing container: /crio-6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0: Error finding container 6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0: Status 404 returned error can't find the container with id 6dac79a36dd045ba463d6ebcdafb5a301af862e17a6a07a03fe1dd476545b9e0
	Dec 08 00:17:50 addons-429840 kubelet[1266]: I1208 00:17:50.629156    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2h5gp" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:17:50 addons-429840 kubelet[1266]: I1208 00:17:50.629224    1266 scope.go:117] "RemoveContainer" containerID="ecc46bf2402d34e133c6c8032c803b659bb432c001f4944fafadf8773f019d4b"
	Dec 08 00:17:50 addons-429840 kubelet[1266]: E1208 00:17:50.629408    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2h5gp_kube-system(314d2c2e-10b3-42ca-9055-ee48f7ce3891)\"" pod="kube-system/registry-creds-764b6fb674-2h5gp" podUID="314d2c2e-10b3-42ca-9055-ee48f7ce3891"
	Dec 08 00:17:59 addons-429840 kubelet[1266]: I1208 00:17:59.630633    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-g6445" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:18:04 addons-429840 kubelet[1266]: I1208 00:18:04.628620    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2h5gp" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:18:04 addons-429840 kubelet[1266]: I1208 00:18:04.628698    1266 scope.go:117] "RemoveContainer" containerID="ecc46bf2402d34e133c6c8032c803b659bb432c001f4944fafadf8773f019d4b"
	Dec 08 00:18:04 addons-429840 kubelet[1266]: E1208 00:18:04.628907    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2h5gp_kube-system(314d2c2e-10b3-42ca-9055-ee48f7ce3891)\"" pod="kube-system/registry-creds-764b6fb674-2h5gp" podUID="314d2c2e-10b3-42ca-9055-ee48f7ce3891"
	Dec 08 00:18:17 addons-429840 kubelet[1266]: I1208 00:18:17.629467    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2h5gp" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:18:17 addons-429840 kubelet[1266]: I1208 00:18:17.629979    1266 scope.go:117] "RemoveContainer" containerID="ecc46bf2402d34e133c6c8032c803b659bb432c001f4944fafadf8773f019d4b"
	Dec 08 00:18:18 addons-429840 kubelet[1266]: I1208 00:18:18.214447    1266 scope.go:117] "RemoveContainer" containerID="ecc46bf2402d34e133c6c8032c803b659bb432c001f4944fafadf8773f019d4b"
	Dec 08 00:18:18 addons-429840 kubelet[1266]: I1208 00:18:18.215058    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2h5gp" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:18:18 addons-429840 kubelet[1266]: I1208 00:18:18.215194    1266 scope.go:117] "RemoveContainer" containerID="a747f3bc0b95e86559cbf6fa59995d387872604c1a86c0bd27958a9e8122ab8e"
	Dec 08 00:18:18 addons-429840 kubelet[1266]: E1208 00:18:18.215479    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2h5gp_kube-system(314d2c2e-10b3-42ca-9055-ee48f7ce3891)\"" pod="kube-system/registry-creds-764b6fb674-2h5gp" podUID="314d2c2e-10b3-42ca-9055-ee48f7ce3891"
	Dec 08 00:18:29 addons-429840 kubelet[1266]: I1208 00:18:29.631927    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2h5gp" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:18:29 addons-429840 kubelet[1266]: I1208 00:18:29.632579    1266 scope.go:117] "RemoveContainer" containerID="a747f3bc0b95e86559cbf6fa59995d387872604c1a86c0bd27958a9e8122ab8e"
	Dec 08 00:18:29 addons-429840 kubelet[1266]: E1208 00:18:29.633232    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2h5gp_kube-system(314d2c2e-10b3-42ca-9055-ee48f7ce3891)\"" pod="kube-system/registry-creds-764b6fb674-2h5gp" podUID="314d2c2e-10b3-42ca-9055-ee48f7ce3891"
	Dec 08 00:18:38 addons-429840 kubelet[1266]: I1208 00:18:38.451426    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a5d940f1-ddd4-49a5-883f-1fbffcfbdd22-gcp-creds\") pod \"hello-world-app-5d498dc89-9fzbz\" (UID: \"a5d940f1-ddd4-49a5-883f-1fbffcfbdd22\") " pod="default/hello-world-app-5d498dc89-9fzbz"
	Dec 08 00:18:38 addons-429840 kubelet[1266]: I1208 00:18:38.451485    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmsf4\" (UniqueName: \"kubernetes.io/projected/a5d940f1-ddd4-49a5-883f-1fbffcfbdd22-kube-api-access-cmsf4\") pod \"hello-world-app-5d498dc89-9fzbz\" (UID: \"a5d940f1-ddd4-49a5-883f-1fbffcfbdd22\") " pod="default/hello-world-app-5d498dc89-9fzbz"
	Dec 08 00:18:38 addons-429840 kubelet[1266]: W1208 00:18:38.701962    1266 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/crio-96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71 WatchSource:0}: Error finding container 96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71: Status 404 returned error can't find the container with id 96922e2f78dd377b01883a0b156d675703f450cac726e740cc72a0becace9d71
	Dec 08 00:18:40 addons-429840 kubelet[1266]: I1208 00:18:40.340953    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-9fzbz" podStartSLOduration=1.7514042440000002 podStartE2EDuration="2.340927206s" podCreationTimestamp="2025-12-08 00:18:38 +0000 UTC" firstStartedPulling="2025-12-08 00:18:38.707660602 +0000 UTC m=+293.219705677" lastFinishedPulling="2025-12-08 00:18:39.297183564 +0000 UTC m=+293.809228639" observedRunningTime="2025-12-08 00:18:40.315047203 +0000 UTC m=+294.827092278" watchObservedRunningTime="2025-12-08 00:18:40.340927206 +0000 UTC m=+294.852972289"
	
	
	==> storage-provisioner [f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c] <==
	W1208 00:18:16.627483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:18.631086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:18.635880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:20.638576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:20.643201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:22.646811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:22.651546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:24.655348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:24.662441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:26.666659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:26.671493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:28.674451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:28.681374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:30.684571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:30.689256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:32.692207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:32.698913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:34.702254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:34.706953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:36.710440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:36.717937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:38.721300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:38.734306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:40.737977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:18:40.742672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-429840 -n addons-429840
helpers_test.go:269: (dbg) Run:  kubectl --context addons-429840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-429840 describe pod ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-429840 describe pod ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7: exit status 1 (92.958413ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-226t7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qqch7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-429840 describe pod ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7: exit status 1
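The post-mortem above can be reproduced by hand against the same profile. A minimal sketch, assuming the addons-429840 context is still reachable and reusing the pod names reported above; the two admission pods come from one-shot Jobs, so the NotFound errors likely just mean they were already cleaned up:

	# List pods not in the Running phase, the same query the harness runs.
	kubectl --context addons-429840 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describe the reported admission pods; NotFound is expected once the
	# completed Jobs have been garbage-collected.
	kubectl --context addons-429840 describe pod \
	  ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7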
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (292.376603ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1208 00:18:41.749789  802392 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:18:41.750711  802392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:18:41.750745  802392 out.go:374] Setting ErrFile to fd 2...
	I1208 00:18:41.750764  802392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:18:41.751107  802392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:18:41.751445  802392 mustload.go:66] Loading cluster: addons-429840
	I1208 00:18:41.751863  802392 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:18:41.751899  802392 addons.go:622] checking whether the cluster is paused
	I1208 00:18:41.752040  802392 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:18:41.752066  802392 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:18:41.752613  802392 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:18:41.775113  802392 ssh_runner.go:195] Run: systemctl --version
	I1208 00:18:41.775170  802392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:18:41.798578  802392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:18:41.905471  802392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:18:41.905579  802392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:18:41.935490  802392 cri.go:89] found id: "a747f3bc0b95e86559cbf6fa59995d387872604c1a86c0bd27958a9e8122ab8e"
	I1208 00:18:41.935510  802392 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:18:41.935516  802392 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:18:41.935519  802392 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:18:41.935523  802392 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:18:41.935526  802392 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:18:41.935529  802392 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:18:41.935533  802392 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:18:41.935536  802392 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:18:41.935543  802392 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:18:41.935547  802392 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:18:41.935551  802392 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:18:41.935555  802392 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:18:41.935558  802392 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:18:41.935562  802392 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:18:41.935570  802392 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:18:41.935576  802392 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:18:41.935580  802392 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:18:41.935584  802392 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:18:41.935587  802392 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:18:41.935592  802392 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:18:41.935595  802392 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:18:41.935598  802392 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:18:41.935601  802392 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:18:41.935608  802392 cri.go:89] found id: ""
	I1208 00:18:41.935662  802392 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:18:41.950245  802392 out.go:203] 
	W1208 00:18:41.953202  802392 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:18:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:18:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:18:41.953227  802392 out.go:285] * 
	* 
	W1208 00:18:41.959769  802392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:18:41.962813  802392 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable ingress --alsologtostderr -v=1: exit status 11 (308.522271ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:18:42.032263  802505 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:18:42.032999  802505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:18:42.033014  802505 out.go:374] Setting ErrFile to fd 2...
	I1208 00:18:42.033021  802505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:18:42.033302  802505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:18:42.033606  802505 mustload.go:66] Loading cluster: addons-429840
	I1208 00:18:42.034005  802505 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:18:42.034025  802505 addons.go:622] checking whether the cluster is paused
	I1208 00:18:42.034163  802505 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:18:42.034179  802505 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:18:42.034723  802505 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:18:42.054001  802505 ssh_runner.go:195] Run: systemctl --version
	I1208 00:18:42.054069  802505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:18:42.073816  802505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:18:42.191732  802505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:18:42.191849  802505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:18:42.231829  802505 cri.go:89] found id: "a747f3bc0b95e86559cbf6fa59995d387872604c1a86c0bd27958a9e8122ab8e"
	I1208 00:18:42.231864  802505 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:18:42.231869  802505 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:18:42.231874  802505 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:18:42.231877  802505 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:18:42.231881  802505 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:18:42.231885  802505 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:18:42.231888  802505 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:18:42.231911  802505 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:18:42.231919  802505 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:18:42.231923  802505 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:18:42.231926  802505 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:18:42.231929  802505 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:18:42.231932  802505 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:18:42.231936  802505 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:18:42.231949  802505 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:18:42.231962  802505 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:18:42.231970  802505 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:18:42.231986  802505 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:18:42.231991  802505 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:18:42.232003  802505 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:18:42.232007  802505 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:18:42.232011  802505 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:18:42.232014  802505 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:18:42.232025  802505 cri.go:89] found id: ""
	I1208 00:18:42.232096  802505 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:18:42.254593  802505 out.go:203] 
	W1208 00:18:42.257750  802505 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:18:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:18:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:18:42.257792  802505 out.go:285] * 
	* 
	W1208 00:18:42.267509  802505 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:18:42.270925  802505 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.42s)
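
The `addons disable` failures in these tests all show the same stderr: before disabling an addon, minikube checks whether the cluster is paused by listing runc containers, and `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. Below is a minimal sketch for reproducing that check by hand, assuming the addons-429840 profile from this run is still up; the first and third commands mirror what the log shows minikube running, while the `ls` is an extra diagnostic that is not part of the test:

    out/minikube-linux-arm64 -p addons-429840 ssh -- sudo runc list -f json
    # expected to reproduce the failure: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-429840 ssh -- sudo ls -ld /run/runc
    # checks whether runc's state directory exists on the node at all
    out/minikube-linux-arm64 -p addons-429840 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # in this run crictl listed the kube-system containers fine, so the failure appears specific to the runc-based pause check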

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-c4kp7" [56a50370-dbdc-4e22-97aa-6377fb2e7724] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003930092s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (263.891983ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:16:17.651054  800007 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:17.651973  800007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:17.651989  800007 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:17.651995  800007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:17.652381  800007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:17.652742  800007 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:17.653401  800007 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:17.653417  800007 addons.go:622] checking whether the cluster is paused
	I1208 00:16:17.653544  800007 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:17.653562  800007 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:17.654296  800007 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:17.672006  800007 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:17.672069  800007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:17.693524  800007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:17.797329  800007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:17.797427  800007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:17.827137  800007 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:17.827163  800007 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:17.827174  800007 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:17.827178  800007 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:17.827187  800007 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:17.827192  800007 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:17.827195  800007 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:17.827198  800007 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:17.827202  800007 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:17.827210  800007 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:17.827214  800007 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:17.827218  800007 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:17.827222  800007 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:17.827225  800007 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:17.827228  800007 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:17.827234  800007 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:17.827240  800007 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:17.827245  800007 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:17.827249  800007 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:17.827252  800007 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:17.827259  800007 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:17.827262  800007 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:17.827265  800007 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:17.827268  800007 cri.go:89] found id: ""
	I1208 00:16:17.827371  800007 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:17.842514  800007 out.go:203] 
	W1208 00:16:17.845467  800007 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:17.845492  800007 out.go:285] * 
	* 
	W1208 00:16:17.851857  800007 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:17.854794  800007 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.38s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.276637ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003817119s
addons_test.go:463: (dbg) Run:  kubectl --context addons-429840 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (288.103967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:16:11.363594  799907 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:11.364318  799907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:11.364331  799907 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:11.364337  799907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:11.364727  799907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:11.365083  799907 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:11.365713  799907 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:11.365727  799907 addons.go:622] checking whether the cluster is paused
	I1208 00:16:11.365856  799907 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:11.365866  799907 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:11.366681  799907 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:11.383974  799907 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:11.384029  799907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:11.411132  799907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:11.521504  799907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:11.521596  799907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:11.551246  799907 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:11.551270  799907 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:11.551276  799907 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:11.551280  799907 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:11.551283  799907 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:11.551288  799907 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:11.551308  799907 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:11.551319  799907 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:11.551324  799907 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:11.551331  799907 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:11.551338  799907 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:11.551341  799907 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:11.551344  799907 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:11.551347  799907 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:11.551351  799907 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:11.551361  799907 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:11.551370  799907 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:11.551388  799907 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:11.551392  799907 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:11.551404  799907 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:11.551410  799907 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:11.551417  799907 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:11.551423  799907 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:11.551426  799907 cri.go:89] found id: ""
	I1208 00:16:11.551486  799907 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:11.572625  799907 out.go:203] 
	W1208 00:16:11.575480  799907 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:11.575503  799907 out.go:285] * 
	* 
	W1208 00:16:11.581908  799907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:11.584947  799907 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)

                                                
                                    
TestAddons/parallel/CSI (51.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1208 00:15:48.509401  791807 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1208 00:15:48.512485  791807 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1208 00:15:48.512508  791807 kapi.go:107] duration metric: took 3.120131ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.130133ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-429840 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-429840 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d67ea09c-4241-4155-95b9-256472651a21] Pending
helpers_test.go:352: "task-pv-pod" [d67ea09c-4241-4155-95b9-256472651a21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d67ea09c-4241-4155-95b9-256472651a21] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004115003s
addons_test.go:572: (dbg) Run:  kubectl --context addons-429840 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-429840 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-429840 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-429840 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-429840 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-429840 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-429840 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6d629009-cd18-4848-b8ce-bfd406497c49] Pending
helpers_test.go:352: "task-pv-pod-restore" [6d629009-cd18-4848-b8ce-bfd406497c49] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6d629009-cd18-4848-b8ce-bfd406497c49] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004042379s
addons_test.go:614: (dbg) Run:  kubectl --context addons-429840 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-429840 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-429840 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (264.479318ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:16:39.238916  800659 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:39.240042  800659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.240061  800659 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:39.240067  800659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.240352  800659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:39.240734  800659 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:39.241137  800659 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.241153  800659 addons.go:622] checking whether the cluster is paused
	I1208 00:16:39.241301  800659 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.241320  800659 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:39.242058  800659 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:39.259890  800659 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:39.259948  800659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:39.277266  800659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:39.381294  800659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:39.381393  800659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:39.413804  800659 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:39.413831  800659 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:39.413837  800659 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:39.413840  800659 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:39.413843  800659 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:39.413847  800659 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:39.413850  800659 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:39.413853  800659 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:39.413856  800659 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:39.413862  800659 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:39.413866  800659 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:39.413869  800659 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:39.413872  800659 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:39.413875  800659 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:39.413878  800659 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:39.413883  800659 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:39.413890  800659 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:39.413894  800659 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:39.413898  800659 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:39.413900  800659 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:39.413905  800659 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:39.413910  800659 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:39.413913  800659 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:39.413916  800659 cri.go:89] found id: ""
	I1208 00:16:39.413970  800659 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:39.430015  800659 out.go:203] 
	W1208 00:16:39.433145  800659 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:39.433170  800659 out.go:285] * 
	* 
	W1208 00:16:39.439546  800659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:39.442500  800659 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (274.888189ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:16:39.500302  800705 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:39.501682  800705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.501699  800705 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:39.501706  800705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:39.501966  800705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:39.502393  800705 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:39.502787  800705 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.502806  800705 addons.go:622] checking whether the cluster is paused
	I1208 00:16:39.502958  800705 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:39.502977  800705 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:39.503529  800705 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:39.524929  800705 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:39.524984  800705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:39.541936  800705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:39.657439  800705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:39.657529  800705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:39.689550  800705 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:39.689570  800705 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:39.689575  800705 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:39.689578  800705 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:39.689594  800705 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:39.689598  800705 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:39.689601  800705 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:39.689605  800705 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:39.689608  800705 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:39.689615  800705 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:39.689618  800705 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:39.689621  800705 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:39.689625  800705 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:39.689628  800705 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:39.689631  800705 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:39.689640  800705 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:39.689643  800705 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:39.689647  800705 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:39.689651  800705 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:39.689654  800705 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:39.689658  800705 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:39.689662  800705 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:39.689665  800705 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:39.689668  800705 cri.go:89] found id: ""
	I1208 00:16:39.689720  800705 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:39.705155  800705 out.go:203] 
	W1208 00:16:39.708278  800705 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:39.708299  800705 out.go:285] * 
	* 
	W1208 00:16:39.714645  800705 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:39.717538  800705 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (51.22s)

                                                
                                    
TestAddons/parallel/Headlamp (3.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-429840 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-429840 --alsologtostderr -v=1: exit status 11 (301.41203ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:15:45.316079  798886 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:15:45.316919  798886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:45.316968  798886 out.go:374] Setting ErrFile to fd 2...
	I1208 00:15:45.317000  798886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:45.317785  798886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:15:45.318203  798886 mustload.go:66] Loading cluster: addons-429840
	I1208 00:15:45.318671  798886 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:45.318713  798886 addons.go:622] checking whether the cluster is paused
	I1208 00:15:45.318903  798886 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:45.318938  798886 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:15:45.319559  798886 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:15:45.339172  798886 ssh_runner.go:195] Run: systemctl --version
	I1208 00:15:45.339261  798886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:15:45.364623  798886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:15:45.481551  798886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:15:45.481647  798886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:15:45.510112  798886 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:15:45.510133  798886 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:15:45.510138  798886 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:15:45.510141  798886 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:15:45.510145  798886 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:15:45.510164  798886 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:15:45.510167  798886 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:15:45.510170  798886 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:15:45.510173  798886 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:15:45.510180  798886 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:15:45.510183  798886 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:15:45.510186  798886 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:15:45.510189  798886 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:15:45.510191  798886 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:15:45.510194  798886 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:15:45.510199  798886 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:15:45.510202  798886 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:15:45.510206  798886 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:15:45.510208  798886 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:15:45.510212  798886 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:15:45.510216  798886 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:15:45.510219  798886 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:15:45.510222  798886 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:15:45.510225  798886 cri.go:89] found id: ""
	I1208 00:15:45.510275  798886 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:15:45.525120  798886 out.go:203] 
	W1208 00:15:45.527968  798886 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:15:45.527998  798886 out.go:285] * 
	* 
	W1208 00:15:45.534412  798886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:15:45.537291  798886 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-429840 --alsologtostderr -v=1": exit status 11
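[editorial note] The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused, and the stderr trace shows that check as two commands run inside the node (a crictl listing of kube-system containers, then `sudo runc list -f json`), with the second failing because /run/runc does not exist. The following is an illustrative sketch only, not minikube's actual code: a minimal Go program mirroring those two commands, assuming it runs on the node with crictl and runc on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// 1. List kube-system containers, as in the cri.go lines of the trace above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))

		// 2. Cross-check the low-level runtime. When /run/runc is absent, this is the
		//    step that fails with "open /run/runc: no such file or directory" and
		//    surfaces as MK_ADDON_ENABLE_PAUSED in the test above.
		if out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
			fmt.Printf("runc list failed: %v\n%s\n", err, out)
		}
	}

Run over `minikube ssh` on the same node, this should reproduce the `open /run/runc: no such file or directory` message seen in the trace.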
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-429840
helpers_test.go:243: (dbg) docker inspect addons-429840:

-- stdout --
	[
	    {
	        "Id": "4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e",
	        "Created": "2025-12-08T00:13:22.039633847Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 793218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:13:22.099748278Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e-json.log",
	        "Name": "/addons-429840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-429840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-429840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e",
	                "LowerDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c8d1c50da4547e80da9f6279e748eb3157185942f63e091cb4f2afe86346d07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-429840",
	                "Source": "/var/lib/docker/volumes/addons-429840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-429840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-429840",
	                "name.minikube.sigs.k8s.io": "addons-429840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5e6bd5e9d9ab86569e41be1f9f0db050fe640dc268b6fe00540a5eeb375bd69",
	            "SandboxKey": "/var/run/docker/netns/e5e6bd5e9d9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-429840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:87:c5:f5:67:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c9d5a0c597ead7d779322ea2df113cf05b50efef1f467d1495dcf34843407b4d",
	                    "EndpointID": "b44c7109c4ba972bab0ade4dd76749da0756227a4eaaf162adcd0969bb8947c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-429840",
	                        "4788dff0a9c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
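[editorial note] Illustrative sketch only, not part of the captured output: the earlier stderr trace reads the host-side SSH port back out of this inspect data with a `docker container inspect -f` template over NetworkSettings.Ports. A minimal Go equivalent, assuming Docker is available and the addons-429840 container still exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as in the trace: NetworkSettings.Ports["22/tcp"][0].HostPort.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-429840").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}

For this container the template resolves to 33493, matching the Ports section above and the sshutil line in the Headlamp stderr trace.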
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-429840 -n addons-429840
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-429840 logs -n 25: (1.468293907s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-177412   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-177412                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-177412   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-931286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-931286   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-931286                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-931286   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-670892 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-670892   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-670892                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-670892   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-177412                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-177412   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-931286                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-931286   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-670892                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-670892   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ --download-only -p download-docker-748036 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-748036 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ -p download-docker-748036                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-748036 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ --download-only -p binary-mirror-361883 --alsologtostderr --binary-mirror http://127.0.0.1:39527 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-361883   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ -p binary-mirror-361883                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-361883   │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ addons  │ disable dashboard -p addons-429840                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ addons  │ enable dashboard -p addons-429840                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ start   │ -p addons-429840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:15 UTC │
	│ addons  │ addons-429840 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ addons  │ addons-429840 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-429840 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-429840          │ jenkins │ v1.37.0 │ 08 Dec 25 00:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:58
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:58.086654  792815 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:58.086829  792815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:58.086886  792815 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:58.086900  792815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:58.087178  792815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:12:58.087689  792815 out.go:368] Setting JSON to false
	I1208 00:12:58.088649  792815 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17710,"bootTime":1765135068,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:12:58.088719  792815 start.go:143] virtualization:  
	I1208 00:12:58.092166  792815 out.go:179] * [addons-429840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:12:58.095237  792815 notify.go:221] Checking for updates...
	I1208 00:12:58.095804  792815 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:12:58.098982  792815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:58.102050  792815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:12:58.104917  792815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:12:58.107706  792815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:12:58.110497  792815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:12:58.113780  792815 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:58.139785  792815 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:58.139908  792815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:58.198511  792815 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:58.189243732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:58.198617  792815 docker.go:319] overlay module found
	I1208 00:12:58.201653  792815 out.go:179] * Using the docker driver based on user configuration
	I1208 00:12:58.204469  792815 start.go:309] selected driver: docker
	I1208 00:12:58.204491  792815 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:58.204505  792815 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:12:58.205259  792815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:58.269254  792815 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:58.260156382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:58.269424  792815 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:58.269652  792815 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:12:58.272550  792815 out.go:179] * Using Docker driver with root privileges
	I1208 00:12:58.275291  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:12:58.275365  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:12:58.275378  792815 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:12:58.275459  792815 start.go:353] cluster config:
	{Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1208 00:12:58.278490  792815 out.go:179] * Starting "addons-429840" primary control-plane node in "addons-429840" cluster
	I1208 00:12:58.281247  792815 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:12:58.284050  792815 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:12:58.286904  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:12:58.286952  792815 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 00:12:58.286966  792815 cache.go:65] Caching tarball of preloaded images
	I1208 00:12:58.286975  792815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:12:58.287058  792815 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:12:58.287069  792815 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 00:12:58.287456  792815 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json ...
	I1208 00:12:58.287488  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json: {Name:mkdd8650adb0bf4e186015e5cc2e904609ad2ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:12:58.302295  792815 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:58.302421  792815 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:12:58.302440  792815 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1208 00:12:58.302444  792815 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1208 00:12:58.302451  792815 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1208 00:12:58.302455  792815 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1208 00:13:16.314475  792815 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1208 00:13:16.314515  792815 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:13:16.314558  792815 start.go:360] acquireMachinesLock for addons-429840: {Name:mk6b903fc45d259c022d88310f1d219bc2e845f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:13:16.314701  792815 start.go:364] duration metric: took 118.672µs to acquireMachinesLock for "addons-429840"
	I1208 00:13:16.314731  792815 start.go:93] Provisioning new machine with config: &{Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:13:16.314800  792815 start.go:125] createHost starting for "" (driver="docker")
	I1208 00:13:16.318305  792815 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1208 00:13:16.318555  792815 start.go:159] libmachine.API.Create for "addons-429840" (driver="docker")
	I1208 00:13:16.318592  792815 client.go:173] LocalClient.Create starting
	I1208 00:13:16.318703  792815 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 00:13:16.477433  792815 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 00:13:16.795524  792815 cli_runner.go:164] Run: docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 00:13:16.810865  792815 cli_runner.go:211] docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 00:13:16.810967  792815 network_create.go:284] running [docker network inspect addons-429840] to gather additional debugging logs...
	I1208 00:13:16.810988  792815 cli_runner.go:164] Run: docker network inspect addons-429840
	W1208 00:13:16.826292  792815 cli_runner.go:211] docker network inspect addons-429840 returned with exit code 1
	I1208 00:13:16.826333  792815 network_create.go:287] error running [docker network inspect addons-429840]: docker network inspect addons-429840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-429840 not found
	I1208 00:13:16.826348  792815 network_create.go:289] output of [docker network inspect addons-429840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-429840 not found
	
	** /stderr **
	I1208 00:13:16.826455  792815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:13:16.844630  792815 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aa6a50}
	I1208 00:13:16.844678  792815 network_create.go:124] attempt to create docker network addons-429840 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 00:13:16.844742  792815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-429840 addons-429840
	I1208 00:13:16.904103  792815 network_create.go:108] docker network addons-429840 192.168.49.0/24 created
	I1208 00:13:16.904136  792815 kic.go:121] calculated static IP "192.168.49.2" for the "addons-429840" container
	I1208 00:13:16.904226  792815 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 00:13:16.919437  792815 cli_runner.go:164] Run: docker volume create addons-429840 --label name.minikube.sigs.k8s.io=addons-429840 --label created_by.minikube.sigs.k8s.io=true
	I1208 00:13:16.936341  792815 oci.go:103] Successfully created a docker volume addons-429840
	I1208 00:13:16.936432  792815 cli_runner.go:164] Run: docker run --rm --name addons-429840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --entrypoint /usr/bin/test -v addons-429840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 00:13:17.981899  792815 cli_runner.go:217] Completed: docker run --rm --name addons-429840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --entrypoint /usr/bin/test -v addons-429840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.045390598s)
	I1208 00:13:17.981932  792815 oci.go:107] Successfully prepared a docker volume addons-429840
	I1208 00:13:17.981981  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:13:17.982003  792815 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 00:13:17.982092  792815 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-429840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 00:13:21.964017  792815 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-429840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.981886472s)
	I1208 00:13:21.964051  792815 kic.go:203] duration metric: took 3.982045153s to extract preloaded images to volume ...
	W1208 00:13:21.964202  792815 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 00:13:21.964301  792815 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 00:13:22.024145  792815 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-429840 --name addons-429840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-429840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-429840 --network addons-429840 --ip 192.168.49.2 --volume addons-429840:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 00:13:22.345963  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Running}}
	I1208 00:13:22.369013  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.395290  792815 cli_runner.go:164] Run: docker exec addons-429840 stat /var/lib/dpkg/alternatives/iptables
	I1208 00:13:22.444981  792815 oci.go:144] the created container "addons-429840" has a running status.
	I1208 00:13:22.445015  792815 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa...
	I1208 00:13:22.585788  792815 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 00:13:22.607315  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.629341  792815 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 00:13:22.629365  792815 kic_runner.go:114] Args: [docker exec --privileged addons-429840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 00:13:22.702326  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:22.721610  792815 machine.go:94] provisionDockerMachine start ...
	I1208 00:13:22.721714  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:22.739485  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:22.739811  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:22.739825  792815 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:13:22.740529  792815 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47132->127.0.0.1:33493: read: connection reset by peer
	I1208 00:13:25.890368  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-429840
	
	I1208 00:13:25.890391  792815 ubuntu.go:182] provisioning hostname "addons-429840"
	I1208 00:13:25.890457  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:25.907496  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:25.907835  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:25.907853  792815 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-429840 && echo "addons-429840" | sudo tee /etc/hostname
	I1208 00:13:26.068552  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-429840
	
	I1208 00:13:26.068640  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.086659  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:26.087062  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:26.087083  792815 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-429840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-429840/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-429840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:13:26.238951  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:13:26.238980  792815 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:13:26.239013  792815 ubuntu.go:190] setting up certificates
	I1208 00:13:26.239027  792815 provision.go:84] configureAuth start
	I1208 00:13:26.239100  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:26.255450  792815 provision.go:143] copyHostCerts
	I1208 00:13:26.255533  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:13:26.255667  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:13:26.255727  792815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:13:26.255778  792815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.addons-429840 san=[127.0.0.1 192.168.49.2 addons-429840 localhost minikube]
	I1208 00:13:26.365519  792815 provision.go:177] copyRemoteCerts
	I1208 00:13:26.365595  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:13:26.365639  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.381644  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:26.486543  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:13:26.504160  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 00:13:26.522593  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:13:26.540098  792815 provision.go:87] duration metric: took 301.04687ms to configureAuth
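To double-check the SAN list baked into the server certificate that configureAuth just generated and copied over, an openssl inspection along these lines works (illustrative only, using the host-side path from the log):
	# hypothetical check: print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'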
	I1208 00:13:26.540168  792815 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:13:26.540401  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:26.540518  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.557193  792815 main.go:143] libmachine: Using SSH client type: native
	I1208 00:13:26.557500  792815 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1208 00:13:26.557519  792815 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:13:26.866482  792815 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:13:26.866508  792815 machine.go:97] duration metric: took 4.144879175s to provisionDockerMachine
	I1208 00:13:26.866518  792815 client.go:176] duration metric: took 10.547916822s to LocalClient.Create
	I1208 00:13:26.866532  792815 start.go:167] duration metric: took 10.547978755s to libmachine.API.Create "addons-429840"
	I1208 00:13:26.866538  792815 start.go:293] postStartSetup for "addons-429840" (driver="docker")
	I1208 00:13:26.866549  792815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:13:26.866612  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:13:26.866658  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:26.884226  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:26.990521  792815 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:13:26.993577  792815 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:13:26.993611  792815 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:13:26.993622  792815 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:13:26.993689  792815 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:13:26.993718  792815 start.go:296] duration metric: took 127.173551ms for postStartSetup
	I1208 00:13:26.994028  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:27.013906  792815 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/config.json ...
	I1208 00:13:27.014259  792815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:13:27.014305  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.031667  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.135983  792815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:13:27.140788  792815 start.go:128] duration metric: took 10.82597119s to createHost
	I1208 00:13:27.140811  792815 start.go:83] releasing machines lock for "addons-429840", held for 10.826097772s
	I1208 00:13:27.140886  792815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-429840
	I1208 00:13:27.157644  792815 ssh_runner.go:195] Run: cat /version.json
	I1208 00:13:27.157698  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.157723  792815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:13:27.157793  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:27.176307  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.198351  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:27.278580  792815 ssh_runner.go:195] Run: systemctl --version
	I1208 00:13:27.370912  792815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:13:27.407918  792815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:13:27.412083  792815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:13:27.412158  792815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:13:27.440356  792815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 00:13:27.440380  792815 start.go:496] detecting cgroup driver to use...
	I1208 00:13:27.440413  792815 detect.go:187] detected "cgroupfs" cgroup driver on host os
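The cgroup driver detected here depends on how the host mounts its cgroup tree; a related manual check is which cgroup version the host runs, which also explains the kubeadm cgroups v1 warning further down in this log (illustrative command, not part of the run):
	# prints "cgroup2fs" on a unified cgroup v2 host, "tmpfs" on a legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup/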
	I1208 00:13:27.440462  792815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:13:27.458951  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:13:27.471263  792815 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:13:27.471325  792815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:13:27.488732  792815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:13:27.507459  792815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:13:27.622654  792815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:13:27.740910  792815 docker.go:234] disabling docker service ...
	I1208 00:13:27.741022  792815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:13:27.761528  792815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:13:27.774428  792815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:13:27.893954  792815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:13:28.013704  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:13:28.027014  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:13:28.041586  792815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:13:28.041669  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.050956  792815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:13:28.051066  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.060124  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.069059  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.078041  792815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:13:28.086016  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.095515  792815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:13:28.109067  792815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
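Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly these settings; this is a reconstruction assuming the stock kicbase config layout, not a capture from the node:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]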
	I1208 00:13:28.117988  792815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:13:28.125756  792815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:13:28.133217  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:28.251789  792815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:13:28.421453  792815 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:13:28.421571  792815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:13:28.425303  792815 start.go:564] Will wait 60s for crictl version
	I1208 00:13:28.425391  792815 ssh_runner.go:195] Run: which crictl
	I1208 00:13:28.428767  792815 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:13:28.452538  792815 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:13:28.452682  792815 ssh_runner.go:195] Run: crio --version
	I1208 00:13:28.480776  792815 ssh_runner.go:195] Run: crio --version
	I1208 00:13:28.510887  792815 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 00:13:28.513704  792815 cli_runner.go:164] Run: docker network inspect addons-429840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:13:28.528136  792815 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:13:28.531986  792815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:13:28.541463  792815 kubeadm.go:884] updating cluster {Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:13:28.541585  792815 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:13:28.541649  792815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:13:28.574230  792815 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:13:28.574265  792815 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:13:28.574322  792815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:13:28.598366  792815 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:13:28.598388  792815 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:13:28.598395  792815 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1208 00:13:28.598481  792815 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-429840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:13:28.598569  792815 ssh_runner.go:195] Run: crio config
	I1208 00:13:28.671663  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:13:28.671689  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:13:28.671716  792815 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:13:28.671741  792815 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-429840 NodeName:addons-429840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:13:28.671870  792815 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-429840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
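A kubeadm config assembled like the one above can be sanity-checked on the node before the real init, for example with a dry run against the same staged binaries (illustrative only; the harness goes straight to kubeadm init later in this log):
	# hypothetical pre-check of the rendered kubeadm config
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run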
	I1208 00:13:28.671949  792815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 00:13:28.679651  792815 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:13:28.679739  792815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:13:28.687320  792815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1208 00:13:28.700053  792815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 00:13:28.712968  792815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1208 00:13:28.725694  792815 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:13:28.729182  792815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:13:28.738820  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:28.857054  792815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:13:28.872825  792815 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840 for IP: 192.168.49.2
	I1208 00:13:28.872892  792815 certs.go:195] generating shared ca certs ...
	I1208 00:13:28.872923  792815 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:28.873085  792815 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:13:29.119830  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt ...
	I1208 00:13:29.119865  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt: {Name:mk1cf232fd20a2ae24bd50dbd542c389d0d66187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.120074  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key ...
	I1208 00:13:29.120088  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key: {Name:mk9c510fcf2ada02d3cca2ea71edca904ff4699f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.120175  792815 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:13:29.309653  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt ...
	I1208 00:13:29.309687  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt: {Name:mkdffa916881131a76043035720c06d3bb1d8b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.309872  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key ...
	I1208 00:13:29.309886  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key: {Name:mk7e31d42fb266508928bf35f3347873ccd52074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.309989  792815 certs.go:257] generating profile certs ...
	I1208 00:13:29.310049  792815 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key
	I1208 00:13:29.310064  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt with IP's: []
	I1208 00:13:29.443820  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt ...
	I1208 00:13:29.443851  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: {Name:mk5ad7c34d54d7c05122259765e9864cc409f97c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.444032  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key ...
	I1208 00:13:29.444045  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.key: {Name:mk14ffc1607ea261e62d795566b07b2bf6abae1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.444124  792815 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e
	I1208 00:13:29.444144  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1208 00:13:29.678576  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e ...
	I1208 00:13:29.678608  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e: {Name:mka44f27223477651c3a6f063e74685ca2941c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.678779  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e ...
	I1208 00:13:29.678794  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e: {Name:mka77b1e4f7987fc0c84b9659704fd9b5a8aba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:29.678897  792815 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt.bcb1a79e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt
	I1208 00:13:29.678980  792815 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key.bcb1a79e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key
	I1208 00:13:29.679032  792815 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key
	I1208 00:13:29.679052  792815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt with IP's: []
	I1208 00:13:30.038062  792815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt ...
	I1208 00:13:30.038103  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt: {Name:mkf969471ee6ea587184950d7175a9fb73a26f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:30.038298  792815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key ...
	I1208 00:13:30.038308  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key: {Name:mkb9bfe0b781f5b511702d33b0a7dadc83334f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:30.038500  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:13:30.038541  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:13:30.038568  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:13:30.038602  792815 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:13:30.039258  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:13:30.063178  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:13:30.085264  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:13:30.105681  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:13:30.125422  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 00:13:30.145609  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:13:30.164632  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:13:30.184323  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:13:30.203443  792815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:13:30.222860  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:13:30.236256  792815 ssh_runner.go:195] Run: openssl version
	I1208 00:13:30.242448  792815 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.250475  792815 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:13:30.258276  792815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.262055  792815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.262133  792815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:13:30.308127  792815 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:13:30.315644  792815 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
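The b5213941.0 symlink name is the OpenSSL subject hash printed by the x509 -hash command just above, so the two steps amount to the following one-liner (hypothetical, shown only to make the relationship explicit):
	# link the CA under its subject-hash name so OpenSSL-based tools can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
	  "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"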
	I1208 00:13:30.323018  792815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:13:30.326489  792815 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 00:13:30.326569  792815 kubeadm.go:401] StartCluster: {Name:addons-429840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-429840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:13:30.326665  792815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:13:30.326733  792815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:13:30.353613  792815 cri.go:89] found id: ""
	I1208 00:13:30.353689  792815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:13:30.361543  792815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:13:30.369264  792815 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:13:30.369370  792815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:13:30.376937  792815 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:13:30.376957  792815 kubeadm.go:158] found existing configuration files:
	
	I1208 00:13:30.377007  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 00:13:30.384371  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:13:30.384437  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:13:30.391688  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 00:13:30.399360  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:13:30.399475  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:13:30.407103  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 00:13:30.414728  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:13:30.414817  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:13:30.422112  792815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 00:13:30.429747  792815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:13:30.429861  792815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:13:30.437525  792815 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:13:30.480107  792815 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 00:13:30.480564  792815 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:13:30.508937  792815 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:13:30.509084  792815 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:13:30.509154  792815 kubeadm.go:319] OS: Linux
	I1208 00:13:30.509239  792815 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:13:30.509316  792815 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:13:30.509401  792815 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:13:30.509507  792815 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:13:30.509598  792815 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:13:30.509678  792815 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:13:30.509768  792815 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:13:30.509824  792815 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:13:30.509874  792815 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:13:30.589363  792815 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:13:30.589480  792815 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:13:30.589575  792815 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:13:30.598132  792815 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:13:30.604836  792815 out.go:252]   - Generating certificates and keys ...
	I1208 00:13:30.604934  792815 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:13:30.605005  792815 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:13:31.303268  792815 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 00:13:31.502142  792815 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 00:13:31.930905  792815 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 00:13:32.417587  792815 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 00:13:32.877387  792815 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 00:13:32.877526  792815 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-429840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:13:33.281408  792815 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 00:13:33.281689  792815 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-429840 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:13:34.461418  792815 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 00:13:34.839197  792815 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 00:13:35.372257  792815 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 00:13:35.372504  792815 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:13:35.748957  792815 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:13:36.201004  792815 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:13:36.344547  792815 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:13:37.518302  792815 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:13:37.643089  792815 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:13:37.643856  792815 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:13:37.646612  792815 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:13:37.650134  792815 out.go:252]   - Booting up control plane ...
	I1208 00:13:37.650258  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:13:37.650347  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:13:37.650424  792815 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:13:37.668167  792815 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:13:37.668317  792815 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:13:37.676003  792815 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:13:37.676437  792815 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:13:37.676774  792815 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:13:37.811880  792815 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:13:37.812000  792815 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:13:38.313152  792815 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.428113ms
	I1208 00:13:38.316517  792815 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 00:13:38.316609  792815 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1208 00:13:38.316912  792815 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 00:13:38.317003  792815 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 00:13:40.729530  792815 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.412604137s
	I1208 00:13:42.955666  792815 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.639026829s
	I1208 00:13:44.819027  792815 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502324534s
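The three control-plane probes above can be repeated by hand from inside the node, assuming the same addresses and ports reported in the log (a hypothetical spot check; these health paths are readable without credentials by default):
	curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler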
	I1208 00:13:44.852574  792815 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 00:13:44.866728  792815 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 00:13:44.879265  792815 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 00:13:44.879500  792815 kubeadm.go:319] [mark-control-plane] Marking the node addons-429840 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 00:13:44.898901  792815 kubeadm.go:319] [bootstrap-token] Using token: s77b7b.z832n76eowpm6ufx
	I1208 00:13:44.901864  792815 out.go:252]   - Configuring RBAC rules ...
	I1208 00:13:44.902054  792815 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 00:13:44.909304  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 00:13:44.919708  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 00:13:44.928335  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 00:13:44.935951  792815 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 00:13:44.940093  792815 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 00:13:45.238194  792815 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 00:13:45.661199  792815 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 00:13:46.226175  792815 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 00:13:46.227428  792815 kubeadm.go:319] 
	I1208 00:13:46.227501  792815 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 00:13:46.227514  792815 kubeadm.go:319] 
	I1208 00:13:46.227592  792815 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 00:13:46.227600  792815 kubeadm.go:319] 
	I1208 00:13:46.227624  792815 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 00:13:46.227686  792815 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 00:13:46.227740  792815 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 00:13:46.227748  792815 kubeadm.go:319] 
	I1208 00:13:46.227804  792815 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 00:13:46.227812  792815 kubeadm.go:319] 
	I1208 00:13:46.227859  792815 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 00:13:46.227865  792815 kubeadm.go:319] 
	I1208 00:13:46.227916  792815 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 00:13:46.227994  792815 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 00:13:46.228065  792815 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 00:13:46.228073  792815 kubeadm.go:319] 
	I1208 00:13:46.228157  792815 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 00:13:46.228236  792815 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 00:13:46.228244  792815 kubeadm.go:319] 
	I1208 00:13:46.228345  792815 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s77b7b.z832n76eowpm6ufx \
	I1208 00:13:46.228463  792815 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 00:13:46.228487  792815 kubeadm.go:319] 	--control-plane 
	I1208 00:13:46.228498  792815 kubeadm.go:319] 
	I1208 00:13:46.228582  792815 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 00:13:46.228591  792815 kubeadm.go:319] 
	I1208 00:13:46.228672  792815 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s77b7b.z832n76eowpm6ufx \
	I1208 00:13:46.228778  792815 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 00:13:46.232805  792815 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 00:13:46.233040  792815 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:13:46.233146  792815 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:13:46.233163  792815 cni.go:84] Creating CNI manager for ""
	I1208 00:13:46.233178  792815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:13:46.236388  792815 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 00:13:46.239158  792815 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 00:13:46.243229  792815 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 00:13:46.243251  792815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 00:13:46.256138  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 00:13:46.568869  792815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 00:13:46.569063  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:46.569168  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-429840 minikube.k8s.io/updated_at=2025_12_08T00_13_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-429840 minikube.k8s.io/primary=true
	I1208 00:13:46.825361  792815 ops.go:34] apiserver oom_adj: -16
	I1208 00:13:46.825489  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:47.326155  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:47.826320  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:48.326263  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:48.826322  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:49.325598  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:49.826182  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:50.325555  792815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 00:13:50.408203  792815 kubeadm.go:1114] duration metric: took 3.839198524s to wait for elevateKubeSystemPrivileges
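	The repeated "kubectl get sa default" calls above are minikube polling (roughly every 500ms) until the "default" ServiceAccount exists, before it is granted cluster-admin via the minikube-rbac ClusterRoleBinding. A minimal shell sketch of the same wait, assuming the in-node kubeconfig and kubectl paths shown in this log (the real loop lives in minikube's Go code):

	    # Poll until the default ServiceAccount appears, then RBAC can be applied
	    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done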
	I1208 00:13:50.408236  792815 kubeadm.go:403] duration metric: took 20.081670133s to StartCluster
	I1208 00:13:50.408256  792815 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:50.408377  792815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:13:50.408762  792815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:13:50.408968  792815 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:13:50.409102  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 00:13:50.409347  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:50.409386  792815 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1208 00:13:50.409461  792815 addons.go:70] Setting yakd=true in profile "addons-429840"
	I1208 00:13:50.409478  792815 addons.go:239] Setting addon yakd=true in "addons-429840"
	I1208 00:13:50.409501  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.409950  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.410166  792815 addons.go:70] Setting inspektor-gadget=true in profile "addons-429840"
	I1208 00:13:50.410189  792815 addons.go:239] Setting addon inspektor-gadget=true in "addons-429840"
	I1208 00:13:50.410211  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.410621  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.410937  792815 addons.go:70] Setting metrics-server=true in profile "addons-429840"
	I1208 00:13:50.410960  792815 addons.go:239] Setting addon metrics-server=true in "addons-429840"
	I1208 00:13:50.410999  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.411466  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.414911  792815 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-429840"
	I1208 00:13:50.414947  792815 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-429840"
	I1208 00:13:50.414981  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.415532  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.416483  792815 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-429840"
	I1208 00:13:50.416573  792815 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-429840"
	I1208 00:13:50.416636  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.417232  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429695  792815 addons.go:70] Setting cloud-spanner=true in profile "addons-429840"
	I1208 00:13:50.429718  792815 addons.go:70] Setting storage-provisioner=true in profile "addons-429840"
	I1208 00:13:50.429739  792815 addons.go:239] Setting addon storage-provisioner=true in "addons-429840"
	I1208 00:13:50.429740  792815 addons.go:239] Setting addon cloud-spanner=true in "addons-429840"
	I1208 00:13:50.429773  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.429780  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.430276  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.430354  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429700  792815 addons.go:70] Setting registry=true in profile "addons-429840"
	I1208 00:13:50.434581  792815 addons.go:239] Setting addon registry=true in "addons-429840"
	I1208 00:13:50.434640  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.436009  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.436256  792815 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-429840"
	I1208 00:13:50.436282  792815 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-429840"
	I1208 00:13:50.436563  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.429711  792815 addons.go:70] Setting registry-creds=true in profile "addons-429840"
	I1208 00:13:50.449272  792815 addons.go:239] Setting addon registry-creds=true in "addons-429840"
	I1208 00:13:50.449323  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.449818  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.454972  792815 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-429840"
	I1208 00:13:50.455049  792815 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-429840"
	I1208 00:13:50.455080  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.455560  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.462955  792815 addons.go:70] Setting volcano=true in profile "addons-429840"
	I1208 00:13:50.463009  792815 addons.go:239] Setting addon volcano=true in "addons-429840"
	I1208 00:13:50.463050  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.463602  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.485170  792815 addons.go:70] Setting volumesnapshots=true in profile "addons-429840"
	I1208 00:13:50.485368  792815 addons.go:239] Setting addon volumesnapshots=true in "addons-429840"
	I1208 00:13:50.485512  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.486069  792815 addons.go:70] Setting default-storageclass=true in profile "addons-429840"
	I1208 00:13:50.486247  792815 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-429840"
	I1208 00:13:50.487732  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.488117  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.496771  792815 out.go:179] * Verifying Kubernetes components...
	I1208 00:13:50.507009  792815 addons.go:70] Setting gcp-auth=true in profile "addons-429840"
	I1208 00:13:50.508822  792815 mustload.go:66] Loading cluster: addons-429840
	I1208 00:13:50.509155  792815 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:13:50.517792  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.524857  792815 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1208 00:13:50.530400  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1208 00:13:50.530434  792815 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1208 00:13:50.530507  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
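	The "docker container inspect -f ..." template above (and in the lines that follow) looks up which host port Docker published for the container's 22/tcp SSH port, so minikube can open SSH clients to 127.0.0.1:&lt;port&gt;. The same lookup as a standalone command, using the container name from this run:

	    # Host port published for container port 22/tcp (same Go template as in the log)
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-429840
	    # Roughly equivalent shorthand
	    docker port addons-429840 22/tcp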
	I1208 00:13:50.543338  792815 addons.go:70] Setting ingress=true in profile "addons-429840"
	I1208 00:13:50.543418  792815 addons.go:239] Setting addon ingress=true in "addons-429840"
	I1208 00:13:50.543495  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.544031  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.560747  792815 addons.go:70] Setting ingress-dns=true in profile "addons-429840"
	I1208 00:13:50.560800  792815 addons.go:239] Setting addon ingress-dns=true in "addons-429840"
	I1208 00:13:50.560848  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.561365  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.571509  792815 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:13:50.575688  792815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:13:50.596508  792815 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1208 00:13:50.597218  792815 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:13:50.597238  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:13:50.597315  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.598321  792815 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1208 00:13:50.628877  792815 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1208 00:13:50.628901  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 00:13:50.628964  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.644793  792815 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1208 00:13:50.598696  792815 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1208 00:13:50.645310  792815 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1208 00:13:50.645325  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1208 00:13:50.599058  792815 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1208 00:13:50.647160  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.647176  792815 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-429840"
	I1208 00:13:50.647219  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.647651  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.657263  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 00:13:50.657283  792815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 00:13:50.657447  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.674910  792815 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 00:13:50.674937  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1208 00:13:50.675002  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.682622  792815 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1208 00:13:50.685752  792815 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 00:13:50.685783  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 00:13:50.685861  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.709876  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 00:13:50.713252  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 00:13:50.719009  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 00:13:50.721951  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 00:13:50.723755  792815 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1208 00:13:50.746816  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 00:13:50.754774  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 00:13:50.758511  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 00:13:50.761041  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1208 00:13:50.761892  792815 addons.go:239] Setting addon default-storageclass=true in "addons-429840"
	I1208 00:13:50.761960  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.762462  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:50.783172  792815 out.go:179]   - Using image docker.io/registry:3.0.0
	I1208 00:13:50.791686  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 00:13:50.791715  792815 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 00:13:50.791829  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.806807  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.808523  792815 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 00:13:50.808551  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1208 00:13:50.808616  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.815359  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:50.822942  792815 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 00:13:50.823133  792815 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1208 00:13:50.823205  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1208 00:13:50.823241  792815 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1208 00:13:50.830702  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 00:13:50.830731  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 00:13:50.830818  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.831219  792815 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 00:13:50.831233  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1208 00:13:50.831275  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.859986  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:50.864886  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:50.867842  792815 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 00:13:50.867868  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1208 00:13:50.867947  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.882582  792815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 00:13:50.894385  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.919900  792815 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 00:13:50.919995  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1208 00:13:50.920113  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.927755  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.929027  792815 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 00:13:50.932920  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.939264  792815 out.go:179]   - Using image docker.io/busybox:stable
	I1208 00:13:50.943277  792815 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 00:13:50.943351  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 00:13:50.943459  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:50.962250  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.963219  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:50.963729  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.019358  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.041906  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.043968  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.051006  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.055412  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	W1208 00:13:51.065627  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.065735  792815 retry.go:31] will retry after 291.17741ms: ssh: handshake failed: EOF
	I1208 00:13:51.076396  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	W1208 00:13:51.086769  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.086911  792815 retry.go:31] will retry after 171.704284ms: ssh: handshake failed: EOF
	I1208 00:13:51.090809  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.091598  792815 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:13:51.091616  792815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:13:51.091674  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:51.128850  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:51.167340  792815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1208 00:13:51.259756  792815 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 00:13:51.259832  792815 retry.go:31] will retry after 516.365027ms: ssh: handshake failed: EOF
	I1208 00:13:51.405649  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1208 00:13:51.405676  792815 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1208 00:13:51.560946  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1208 00:13:51.560975  792815 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1208 00:13:51.735360  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1208 00:13:51.735388  792815 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1208 00:13:51.756323  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1208 00:13:51.773996  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:13:51.776622  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 00:13:51.796567  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 00:13:51.805498  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 00:13:51.805530  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 00:13:51.835852  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 00:13:51.873907  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 00:13:51.913924  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 00:13:51.913953  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 00:13:51.925720  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:13:51.938079  792815 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1208 00:13:51.938120  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1208 00:13:51.956677  792815 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 00:13:51.956704  792815 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 00:13:52.045885  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 00:13:52.060439  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 00:13:52.060466  792815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 00:13:52.108497  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 00:13:52.108524  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 00:13:52.132649  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 00:13:52.138420  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1208 00:13:52.265384  792815 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 00:13:52.265409  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 00:13:52.268902  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 00:13:52.268927  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 00:13:52.334484  792815 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 00:13:52.334525  792815 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 00:13:52.335419  792815 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 00:13:52.335441  792815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 00:13:52.455860  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1208 00:13:52.503529  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 00:13:52.503569  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 00:13:52.525637  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 00:13:52.533616  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 00:13:52.625152  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 00:13:52.625183  792815 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 00:13:52.796254  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 00:13:52.796293  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 00:13:52.798571  792815 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 00:13:52.798593  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 00:13:52.976201  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 00:13:52.976229  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 00:13:52.996675  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 00:13:53.241612  792815 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 00:13:53.241638  792815 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 00:13:53.439710  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 00:13:53.439735  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 00:13:53.602661  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 00:13:53.602688  792815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 00:13:53.679751  792815 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.51237932s)
	I1208 00:13:53.679830  792815 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.797222465s)
	I1208 00:13:53.679941  792815 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
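	The bash pipeline completed above rewrites the kube-system/coredns ConfigMap, inserting a hosts block before the forward directive and a log directive before errors. Reconstructed from the sed expressions in the log (the resulting Corefile is not dumped verbatim here), the relevant fragment looks like:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }

	This is what makes host.minikube.internal resolvable from inside the cluster.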
	I1208 00:13:53.681093  792815 node_ready.go:35] waiting up to 6m0s for node "addons-429840" to be "Ready" ...
	I1208 00:13:53.850787  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 00:13:53.850807  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 00:13:54.169148  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 00:13:54.169213  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 00:13:54.259865  792815 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-429840" context rescaled to 1 replicas
	I1208 00:13:54.383782  792815 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 00:13:54.383810  792815 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 00:13:54.637912  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1208 00:13:55.712083  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:13:56.091894  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.317860878s)
	I1208 00:13:56.092168  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.335817348s)
	I1208 00:13:56.753784  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.917898923s)
	I1208 00:13:56.753842  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.879913806s)
	I1208 00:13:56.753896  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.828155121s)
	I1208 00:13:56.753923  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.708010355s)
	I1208 00:13:56.753980  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.6213067s)
	I1208 00:13:56.754200  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.615733022s)
	I1208 00:13:56.754375  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.957132127s)
	I1208 00:13:56.754461  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.298567982s)
	I1208 00:13:56.754475  792815 addons.go:495] Verifying addon registry=true in "addons-429840"
	I1208 00:13:56.754531  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.228859525s)
	I1208 00:13:56.754553  792815 addons.go:495] Verifying addon metrics-server=true in "addons-429840"
	I1208 00:13:56.754592  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.220935963s)
	I1208 00:13:56.754651  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.978004648s)
	I1208 00:13:56.754658  792815 addons.go:495] Verifying addon ingress=true in "addons-429840"
	I1208 00:13:56.757509  792815 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-429840 service yakd-dashboard -n yakd-dashboard
	
	I1208 00:13:56.759572  792815 out.go:179] * Verifying ingress addon...
	I1208 00:13:56.759609  792815 out.go:179] * Verifying registry addon...
	I1208 00:13:56.764180  792815 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 00:13:56.764180  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 00:13:56.810652  792815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 00:13:56.810682  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:56.811618  792815 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 00:13:56.811645  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:56.834797  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.838077546s)
	W1208 00:13:56.834834  792815 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 00:13:56.834869  792815 retry.go:31] will retry after 357.294897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 00:13:57.192389  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
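	The failed apply above is the usual CRD ordering race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same apply that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established those CRDs, hence "ensure CRDs are installed first". The retry at 00:13:57 (with --force) succeeds once the CRDs are ready. A hedged sketch of avoiding the race by splitting the apply and waiting for establishment, using the CRD and manifest names from this log:

	    # Install the snapshot CRDs first and wait until they are Established
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # Then apply the snapshot class, RBAC, and controller deployment
	    kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml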
	I1208 00:13:57.272925  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:57.273154  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:57.317952  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.679990536s)
	I1208 00:13:57.317982  792815 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-429840"
	I1208 00:13:57.321167  792815 out.go:179] * Verifying csi-hostpath-driver addon...
	I1208 00:13:57.324676  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 00:13:57.373572  792815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 00:13:57.373641  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:57.768230  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:57.768561  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:57.868682  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:13:58.184830  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:13:58.268158  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:58.268303  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:58.328187  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:58.429320  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 00:13:58.429467  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:58.446707  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:58.576166  792815 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 00:13:58.589648  792815 addons.go:239] Setting addon gcp-auth=true in "addons-429840"
	I1208 00:13:58.589697  792815 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:13:58.590189  792815 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:13:58.609064  792815 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 00:13:58.609118  792815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:13:58.625827  792815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:13:58.768532  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:58.768815  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:58.828548  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.268183  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:59.268446  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:59.328354  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.769159  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:13:59.769880  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:13:59.827640  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:13:59.939956  792815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.747479152s)
	I1208 00:13:59.940066  792815 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.330968377s)
	I1208 00:13:59.943373  792815 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 00:13:59.946198  792815 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1208 00:13:59.949146  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 00:13:59.949175  792815 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 00:13:59.962836  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 00:13:59.962973  792815 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 00:13:59.975970  792815 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 00:13:59.975993  792815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1208 00:13:59.989283  792815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1208 00:14:00.203853  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:00.275411  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:00.275668  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:00.335678  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:00.701788  792815 addons.go:495] Verifying addon gcp-auth=true in "addons-429840"
	I1208 00:14:00.704935  792815 out.go:179] * Verifying gcp-auth addon...
	I1208 00:14:00.708596  792815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 00:14:00.715345  792815 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 00:14:00.715370  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:00.768259  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:00.768328  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:00.828460  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:01.212161  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:01.267650  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:01.268031  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:01.328202  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:01.717422  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:01.767884  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:01.769243  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:01.827881  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:02.212256  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:02.267653  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:02.268028  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:02.328266  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:02.683946  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:02.711787  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:02.768107  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:02.768788  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:02.827998  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:03.212301  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:03.267377  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:03.267774  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:03.328043  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:03.712256  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:03.767503  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:03.768096  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:03.828158  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:04.212347  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:04.268134  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:04.268336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:04.328248  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:04.712494  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:04.767684  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:04.767760  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:04.827872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:05.184450  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:05.212495  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:05.267679  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:05.268076  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:05.328378  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:05.712424  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:05.767834  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:05.767982  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:05.827842  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:06.212137  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:06.268247  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:06.268384  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:06.327945  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:06.712243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:06.768175  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:06.768234  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:06.828329  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:07.211872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:07.267855  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:07.267975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:07.328874  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:07.683777  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:07.711734  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:07.768112  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:07.768397  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:07.828308  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:08.212450  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:08.267683  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:08.267838  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:08.327680  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:08.712350  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:08.767321  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:08.767622  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:08.827550  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:09.211288  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:09.269747  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:09.270244  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:09.328078  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:09.684152  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:09.712213  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:09.767570  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:09.767637  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:09.828325  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:10.211989  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:10.268134  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:10.268704  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:10.327832  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:10.711973  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:10.767962  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:10.768194  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:10.828179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:11.212094  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:11.267874  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:11.268056  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:11.327553  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:11.684494  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:11.712875  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:11.767649  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:11.768298  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:11.828232  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:12.212323  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:12.267765  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:12.267951  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:12.328618  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:12.711796  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:12.767816  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:12.768128  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:12.827863  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:13.212171  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:13.267258  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:13.267544  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:13.328457  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:13.712938  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:13.767978  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:13.768243  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:13.827999  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:14.183813  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:14.211902  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:14.267861  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:14.268240  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:14.327735  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:14.711492  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:14.767842  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:14.767975  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:14.827629  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:15.211771  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:15.267668  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:15.268146  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:15.328044  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:15.712646  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:15.767596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:15.767728  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:15.827540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:16.184470  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:16.212575  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:16.267637  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:16.267699  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:16.327522  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:16.712141  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:16.768128  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:16.768296  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:16.827972  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:17.211661  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:17.267785  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:17.267887  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:17.327705  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:17.712434  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:17.767246  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:17.767478  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:17.828265  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:18.213074  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:18.268345  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:18.268461  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:18.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:18.684423  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:18.712534  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:18.767506  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:18.767721  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:18.828311  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:19.211427  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:19.267373  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:19.267522  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:19.328359  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:19.712707  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:19.767565  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:19.767889  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:19.827599  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:20.212076  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:20.268533  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:20.269050  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:20.327975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:20.712418  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:20.767947  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:20.768082  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:20.827646  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:21.184350  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:21.212319  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:21.267450  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:21.267696  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:21.328851  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:21.711936  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:21.767891  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:21.768244  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:21.827795  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:22.211750  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:22.268206  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:22.268441  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:22.328188  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:22.711850  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:22.768109  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:22.768214  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:22.828474  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:23.184585  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:23.211324  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:23.268697  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:23.269253  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:23.327970  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:23.711393  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:23.767387  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:23.767489  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:23.828127  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:24.211745  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:24.267840  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:24.267916  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:24.327931  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:24.711861  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:24.768188  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:24.768317  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:24.827454  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:25.186461  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:25.216494  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:25.267686  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:25.267699  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:25.327482  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:25.712461  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:25.767359  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:25.767516  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:25.828439  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:26.212169  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:26.267401  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:26.267846  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:26.327775  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:26.711895  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:26.768946  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:26.769703  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:26.827522  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:27.211605  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:27.267684  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:27.267932  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:27.328001  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:27.683747  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:27.711694  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:27.768129  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:27.768289  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:27.827761  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:28.211980  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:28.268130  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:28.268590  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:28.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:28.711526  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:28.767780  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:28.767813  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:28.828344  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:29.212111  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:29.268193  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:29.268817  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:29.327655  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:29.684730  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:29.711239  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:29.767083  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:29.767160  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:29.827526  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:30.212495  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:30.267697  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:30.267865  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:30.327780  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:30.712019  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:30.768635  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:30.768767  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:30.828632  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:31.211470  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:31.267652  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:31.268027  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:31.327542  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1208 00:14:31.684798  792815 node_ready.go:57] node "addons-429840" has "Ready":"False" status (will retry)
	I1208 00:14:31.711596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:31.767540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:31.767688  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:31.828322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:32.212357  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:32.267698  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:32.267769  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:32.327574  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:32.685704  792815 node_ready.go:49] node "addons-429840" is "Ready"
	I1208 00:14:32.685740  792815 node_ready.go:38] duration metric: took 39.004623693s for node "addons-429840" to be "Ready" ...
	I1208 00:14:32.685756  792815 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:14:32.685818  792815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:14:32.704587  792815 api_server.go:72] duration metric: took 42.295583668s to wait for apiserver process to appear ...
	I1208 00:14:32.704615  792815 api_server.go:88] waiting for apiserver healthz status ...
	I1208 00:14:32.704633  792815 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1208 00:14:32.712966  792815 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1208 00:14:32.714584  792815 api_server.go:141] control plane version: v1.34.2
	I1208 00:14:32.714617  792815 api_server.go:131] duration metric: took 9.995632ms to wait for apiserver health ...
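For context on the healthz wait above: it reduces to an HTTP GET against the apiserver's /healthz endpoint on the cluster's TLS port, retried until it returns 200 with body "ok". A minimal Go sketch of an equivalent probe follows; the endpoint URL is taken from the log, while skipping TLS verification (instead of presenting the cluster client certificates, as minikube itself does) is purely an assumption to keep the sketch self-contained.

// healthz_probe.go: illustrative sketch of the apiserver /healthz check
// logged above; not the minikube implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip cert verification instead of
		// loading the cluster's client certificates.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with the body "ok", as in the log.
	fmt.Printf("%d %s\n", resp.StatusCode, string(body))
}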
	I1208 00:14:32.714627  792815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 00:14:32.720706  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:32.728084  792815 system_pods.go:59] 19 kube-system pods found
	I1208 00:14:32.728123  792815 system_pods.go:61] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending
	I1208 00:14:32.728131  792815 system_pods.go:61] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:32.728135  792815 system_pods.go:61] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:32.728139  792815 system_pods.go:61] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:32.728142  792815 system_pods.go:61] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:32.728146  792815 system_pods.go:61] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:32.728150  792815 system_pods.go:61] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:32.728154  792815 system_pods.go:61] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:32.728158  792815 system_pods.go:61] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:32.728163  792815 system_pods.go:61] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:32.728171  792815 system_pods.go:61] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:32.728178  792815 system_pods.go:61] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:32.728196  792815 system_pods.go:61] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:32.728203  792815 system_pods.go:61] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending
	I1208 00:14:32.728210  792815 system_pods.go:61] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:32.728220  792815 system_pods.go:61] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:32.728224  792815 system_pods.go:61] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending
	I1208 00:14:32.728229  792815 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending
	I1208 00:14:32.728233  792815 system_pods.go:61] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:32.728239  792815 system_pods.go:74] duration metric: took 13.605827ms to wait for pod list to return data ...
	I1208 00:14:32.728256  792815 default_sa.go:34] waiting for default service account to be created ...
	I1208 00:14:32.733614  792815 default_sa.go:45] found service account: "default"
	I1208 00:14:32.733647  792815 default_sa.go:55] duration metric: took 5.378378ms for default service account to be created ...
	I1208 00:14:32.733657  792815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 00:14:32.740546  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:32.740590  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending
	I1208 00:14:32.740597  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:32.740602  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:32.740609  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:32.740614  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:32.740620  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:32.740624  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:32.740629  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:32.740634  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:32.740638  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:32.740644  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:32.740653  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:32.740670  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:32.740685  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:32.740694  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:32.740702  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:32.740706  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending
	I1208 00:14:32.740711  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending
	I1208 00:14:32.740715  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:32.740729  792815 retry.go:31] will retry after 271.192312ms: missing components: kube-dns
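The "will retry after ...: missing components: kube-dns" lines reflect a poll loop that lists kube-system pods and keeps waiting until the required components report Running. The sketch below shows one way to express that loop with client-go; the kubeconfig location, the k8s-app=kube-dns label selector, and the fixed 300ms retry interval are assumptions for illustration, not minikube's actual retry/backoff logic.

// dns_wait.go: illustrative poll-until-running loop for CoreDNS,
// modelled loosely on the retries in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: use the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		// Assumption: CoreDNS pods carry the conventional k8s-app=kube-dns label.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Println("kube-dns is running:", p.Name)
				return
			}
		}
		fmt.Println("missing components: kube-dns; retrying")
		time.Sleep(300 * time.Millisecond)
	}
}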
	I1208 00:14:32.848377  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:32.883302  792815 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 00:14:32.883322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:32.883593  792815 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 00:14:32.883608  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:33.017280  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.017329  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.017337  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:33.017344  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending
	I1208 00:14:33.017348  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending
	I1208 00:14:33.017352  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.017356  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.017362  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.017366  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.017379  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:33.017389  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.017393  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.017399  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.017409  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending
	I1208 00:14:33.017415  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.017421  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.017429  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending
	I1208 00:14:33.017437  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.017451  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.017459  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending
	I1208 00:14:33.017476  792815 retry.go:31] will retry after 291.352747ms: missing components: kube-dns
	I1208 00:14:33.236343  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:33.319349  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.319383  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.319392  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending
	I1208 00:14:33.319408  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:33.319415  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:33.319423  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.319429  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.319439  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.319443  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.319448  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending
	I1208 00:14:33.319459  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.319463  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.319469  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.319488  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:33.319495  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.319502  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.319511  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:33.319517  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.319524  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.319536  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 00:14:33.319551  792815 retry.go:31] will retry after 378.336421ms: missing components: kube-dns
	I1208 00:14:33.321072  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:33.325250  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:33.334764  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:33.702512  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:33.702593  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 00:14:33.702618  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1208 00:14:33.702640  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:33.702678  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:33.702696  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:33.702715  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:33.702734  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:33.702761  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:33.702785  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1208 00:14:33.702803  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:33.702820  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:33.702873  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:33.702898  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:33.702921  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:33.702941  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:33.702971  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:33.702992  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.703023  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:33.703042  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 00:14:33.703082  792815 retry.go:31] will retry after 375.454237ms: missing components: kube-dns
	I1208 00:14:33.713358  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:33.768010  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:33.769069  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:33.828151  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:34.088866  792815 system_pods.go:86] 19 kube-system pods found
	I1208 00:14:34.088952  792815 system_pods.go:89] "coredns-66bc5c9577-vjrlp" [d78b2648-3dfe-49d2-a2b3-583b36f74c72] Running
	I1208 00:14:34.088978  792815 system_pods.go:89] "csi-hostpath-attacher-0" [81c49438-a1a8-43fc-8278-e85cc2c28dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1208 00:14:34.089017  792815 system_pods.go:89] "csi-hostpath-resizer-0" [51df8ac9-e52f-449e-b8b1-48143b23181b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1208 00:14:34.089048  792815 system_pods.go:89] "csi-hostpathplugin-q66vl" [ae5bf1e4-63ae-4010-9c1d-3452f1191d24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1208 00:14:34.089066  792815 system_pods.go:89] "etcd-addons-429840" [3698218c-d748-4d1e-848b-8e7744d93ad6] Running
	I1208 00:14:34.089087  792815 system_pods.go:89] "kindnet-zcvnv" [a84fad01-9118-47ef-84df-43c80ec29b1b] Running
	I1208 00:14:34.089105  792815 system_pods.go:89] "kube-apiserver-addons-429840" [e84a8c29-d570-4be1-8739-4baf50c80faf] Running
	I1208 00:14:34.089133  792815 system_pods.go:89] "kube-controller-manager-addons-429840" [a821f32a-44fc-4ed8-b87b-ebf8dcd37dfe] Running
	I1208 00:14:34.089160  792815 system_pods.go:89] "kube-ingress-dns-minikube" [d08476a4-f1d1-4e91-8da8-5ee53f55b043] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1208 00:14:34.089180  792815 system_pods.go:89] "kube-proxy-29dtj" [4b9353c8-e6c4-4a8a-a04b-83a9cc2b10e7] Running
	I1208 00:14:34.089198  792815 system_pods.go:89] "kube-scheduler-addons-429840" [5a4b58ae-ba80-46a7-8530-2316c0db8364] Running
	I1208 00:14:34.089218  792815 system_pods.go:89] "metrics-server-85b7d694d7-9z5hq" [c36877f7-0444-4eb9-841b-5976a8caca66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1208 00:14:34.089248  792815 system_pods.go:89] "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 00:14:34.089273  792815 system_pods.go:89] "registry-6b586f9694-p77p6" [e8f0f21f-b385-4326-9bda-98db82c6253d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1208 00:14:34.089299  792815 system_pods.go:89] "registry-creds-764b6fb674-2h5gp" [314d2c2e-10b3-42ca-9055-ee48f7ce3891] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 00:14:34.089319  792815 system_pods.go:89] "registry-proxy-9vjr9" [25e88a12-651c-48d8-97b2-943f790425e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1208 00:14:34.089351  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-675j4" [23d9320d-93f3-4595-bb82-d5691c7d9add] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:34.089375  792815 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rh7x7" [77d93b33-12ea-4959-8fd5-4931534c09af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1208 00:14:34.089396  792815 system_pods.go:89] "storage-provisioner" [f6386f97-9db4-4b7e-ad1c-d268cfba68b2] Running
	I1208 00:14:34.089418  792815 system_pods.go:126] duration metric: took 1.355754761s to wait for k8s-apps to be running ...
	I1208 00:14:34.089448  792815 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 00:14:34.089524  792815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:14:34.103729  792815 system_svc.go:56] duration metric: took 14.282716ms WaitForService to wait for kubelet
	I1208 00:14:34.103799  792815 kubeadm.go:587] duration metric: took 43.694799911s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:14:34.103832  792815 node_conditions.go:102] verifying NodePressure condition ...
	I1208 00:14:34.107096  792815 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 00:14:34.107169  792815 node_conditions.go:123] node cpu capacity is 2
	I1208 00:14:34.107199  792815 node_conditions.go:105] duration metric: took 3.348129ms to run NodePressure ...
	I1208 00:14:34.107223  792815 start.go:242] waiting for startup goroutines ...
	I1208 00:14:34.212139  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:34.268582  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:34.268921  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:34.328953  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:34.712996  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:34.769381  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:34.769802  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:34.827963  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:35.212735  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:35.312894  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:35.313395  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:35.332473  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:35.711872  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:35.769118  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:35.769296  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:35.828456  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:36.213282  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:36.270456  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:36.270965  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:36.335696  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:36.713364  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:36.770026  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:36.770350  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:36.829966  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:37.212191  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:37.268197  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:37.269581  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:37.330014  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:37.713179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:37.769982  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:37.770400  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:37.828550  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:38.211463  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:38.267832  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:38.268336  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:38.328736  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:38.711484  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:38.769588  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:38.770523  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:38.829567  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:39.212554  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:39.269782  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:39.270260  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:39.328981  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:39.712698  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:39.769792  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:39.770315  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:39.828855  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:40.212063  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:40.267983  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:40.268019  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:40.329744  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:40.712979  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:40.769954  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:40.770646  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:40.827907  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:41.212501  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:41.269512  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:41.269848  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:41.328606  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:41.712544  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:41.768906  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:41.769059  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:41.827998  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:42.212585  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:42.269283  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:42.269587  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:42.328431  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:42.711607  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:42.768082  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:42.768427  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:42.828843  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:43.212865  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:43.270182  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:43.270591  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:43.329642  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:43.711868  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:43.770067  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:43.770428  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:43.832078  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:44.213127  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:44.269187  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:44.269287  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:44.328243  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:44.712084  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:44.769534  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:44.769749  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:44.827702  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:45.213899  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:45.269794  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:45.270431  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:45.328482  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:45.711907  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:45.768825  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:45.770157  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:45.827852  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:46.212294  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:46.268569  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:46.268809  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:46.329703  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:46.711895  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:46.769297  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:46.770104  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:46.828713  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:47.212459  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:47.268906  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:47.269149  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:47.328819  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:47.712223  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:47.768324  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:47.768443  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:47.829047  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:48.211551  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:48.268643  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:48.268791  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:48.329216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:48.711642  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:48.768552  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:48.768688  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:48.828017  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:49.211946  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:49.268450  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:49.268572  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:49.369173  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:49.712824  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:49.768042  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:49.768218  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:49.828389  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:50.211999  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:50.268645  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:50.269189  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:50.328153  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:50.713049  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:50.820409  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:50.820975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:50.831381  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:51.212684  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:51.268718  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:51.268939  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:51.328242  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:51.712863  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:51.768303  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:51.768954  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:51.828506  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:52.211916  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:52.268656  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:52.269425  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:52.329375  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:52.711797  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:52.772576  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:52.773182  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:52.831277  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:53.213067  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:53.313919  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:53.314282  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:53.328985  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:53.712761  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:53.769194  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:53.769368  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:53.828890  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:54.227760  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:54.328531  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:54.328897  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:54.334119  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:54.715785  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:54.816603  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:54.816838  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:54.829528  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:55.212322  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:55.267707  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:55.278301  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:55.331102  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:55.713710  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:55.767880  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:55.768072  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:55.828225  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:56.218919  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:56.323491  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:56.324602  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:56.329773  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:56.712378  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:56.769315  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:56.769606  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:56.828932  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:57.214016  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:57.268462  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:57.269087  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:57.329236  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:57.713062  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:57.769995  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:57.770325  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:57.828443  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:58.212258  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:58.268329  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:58.273077  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:58.328791  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:58.712972  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:58.769736  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:58.770436  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:58.829179  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:59.212003  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:59.269893  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:59.270021  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:59.328336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:14:59.712536  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:14:59.768793  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:14:59.768936  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:14:59.828319  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:00.305980  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:00.306324  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:00.306779  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:00.425621  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:00.713645  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:00.771299  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:00.771485  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:00.831594  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:01.211981  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:01.268867  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:01.269237  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:01.328821  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:01.712379  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:01.767797  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:01.767937  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:01.828216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:02.212278  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:02.269669  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:02.270043  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:02.328757  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:02.712585  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:02.813577  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:02.813984  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:02.913948  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:03.212609  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:03.269449  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:03.269541  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:03.329452  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:03.712023  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:03.769510  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:03.769663  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:03.827822  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:04.212483  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:04.272230  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:04.272591  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:04.329271  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:04.711984  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:04.813331  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:04.813446  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:04.828637  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:05.212910  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:05.280892  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:05.281328  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:05.329672  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:05.711948  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:05.768711  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:05.768898  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:05.828491  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:06.211889  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:06.269269  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:06.269751  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:06.328271  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:06.711503  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:06.769602  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:06.769760  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:06.828113  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:07.212723  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:07.269856  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:07.270074  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:07.328058  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:07.712362  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:07.768941  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:07.769920  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:07.828758  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:08.212216  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:08.268317  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:08.268504  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:08.328469  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:08.711519  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:08.776373  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:08.776563  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:08.875811  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:09.212107  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:09.268834  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:09.268975  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:09.336383  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:09.712280  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:09.767500  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:09.767831  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:09.827851  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:10.212595  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:10.268348  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:10.269063  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:10.328900  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:10.712417  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:10.768834  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:10.768997  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:10.828744  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:11.211644  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:11.269648  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:11.269788  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:11.328056  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:11.712728  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:11.768304  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:11.768441  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:11.828596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:12.212499  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:12.268483  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 00:15:12.269842  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:12.328128  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:12.712076  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:12.769542  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:12.769702  792815 kapi.go:107] duration metric: took 1m16.005525778s to wait for kubernetes.io/minikube-addons=registry ...
	I1208 00:15:12.828289  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:13.212580  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:13.313741  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:13.335219  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:13.712194  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:13.769081  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:13.829421  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:14.211836  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:14.269299  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:14.329745  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:14.712563  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:14.767395  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:14.828347  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:15.212014  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:15.268319  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:15.328107  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:15.712226  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:15.767219  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:15.828316  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:16.211596  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:16.267297  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:16.328186  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:16.712027  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:16.768486  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:16.828477  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:17.212015  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:17.269836  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:17.337273  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:17.712490  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:17.768301  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:17.829309  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:18.212318  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:18.268032  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:18.329594  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:18.712531  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:18.767622  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:18.827818  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:19.211786  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:19.268205  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:19.328715  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:19.720180  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:19.815334  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:19.916584  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:20.213137  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 00:15:20.313262  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:20.329013  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:20.712195  792815 kapi.go:107] duration metric: took 1m20.003600343s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1208 00:15:20.715451  792815 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-429840 cluster.
	I1208 00:15:20.718449  792815 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 00:15:20.721393  792815 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
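The gcp-auth messages above point at the `gcp-auth-skip-secret` label as the opt-out mechanism for credential mounting. As a minimal sketch (not taken from this test run), the Go program below builds a pod manifest carrying that label; the label key comes from the log message itself, while the value "true" and the pod/container names are illustrative assumptions.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Pod spec that opts out of gcp-auth credential mounting via the
		// gcp-auth-skip-secret label mentioned in the addon message above.
		// The value "true" is an assumption; the key is what the log names.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds", // illustrative name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // render the manifest as YAML for kubectl apply
	}

Under that assumption, applying the rendered manifest would create a pod in the addons-429840 cluster without the mounted GCP credential secret.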
	I1208 00:15:20.767437  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:20.828539  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:21.268066  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:21.328288  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:21.768460  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:21.829988  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:22.268268  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:22.328963  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:22.768369  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:22.829540  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:23.268134  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:23.328336  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:23.767820  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:23.828126  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:24.267190  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:24.328300  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:24.768447  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:24.828721  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:25.268429  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:25.328356  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:25.769443  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:25.829063  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:26.267340  792815 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 00:15:26.332265  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:26.767762  792815 kapi.go:107] duration metric: took 1m30.003580281s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 00:15:26.828145  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:27.328100  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:27.837360  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:28.329213  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:28.829900  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:29.329154  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:29.829013  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:30.328395  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:30.828580  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:31.327814  792815 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 00:15:31.828925  792815 kapi.go:107] duration metric: took 1m34.504244516s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 00:15:31.832104  792815 out.go:179] * Enabled addons: inspektor-gadget, default-storageclass, ingress-dns, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1208 00:15:31.834929  792815 addons.go:530] duration metric: took 1m41.425531543s for enable addons: enabled=[inspektor-gadget default-storageclass ingress-dns amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1208 00:15:31.834990  792815 start.go:247] waiting for cluster config update ...
	I1208 00:15:31.835017  792815 start.go:256] writing updated cluster config ...
	I1208 00:15:31.835320  792815 ssh_runner.go:195] Run: rm -f paused
	I1208 00:15:31.840010  792815 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 00:15:31.843388  792815 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vjrlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.848866  792815 pod_ready.go:94] pod "coredns-66bc5c9577-vjrlp" is "Ready"
	I1208 00:15:31.848894  792815 pod_ready.go:86] duration metric: took 5.475109ms for pod "coredns-66bc5c9577-vjrlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.851566  792815 pod_ready.go:83] waiting for pod "etcd-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.856549  792815 pod_ready.go:94] pod "etcd-addons-429840" is "Ready"
	I1208 00:15:31.856589  792815 pod_ready.go:86] duration metric: took 4.9964ms for pod "etcd-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.859007  792815 pod_ready.go:83] waiting for pod "kube-apiserver-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.863615  792815 pod_ready.go:94] pod "kube-apiserver-addons-429840" is "Ready"
	I1208 00:15:31.863648  792815 pod_ready.go:86] duration metric: took 4.612315ms for pod "kube-apiserver-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:31.865919  792815 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.244288  792815 pod_ready.go:94] pod "kube-controller-manager-addons-429840" is "Ready"
	I1208 00:15:32.244316  792815 pod_ready.go:86] duration metric: took 378.366929ms for pod "kube-controller-manager-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.444601  792815 pod_ready.go:83] waiting for pod "kube-proxy-29dtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:32.844828  792815 pod_ready.go:94] pod "kube-proxy-29dtj" is "Ready"
	I1208 00:15:32.844857  792815 pod_ready.go:86] duration metric: took 400.228555ms for pod "kube-proxy-29dtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.044273  792815 pod_ready.go:83] waiting for pod "kube-scheduler-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.444228  792815 pod_ready.go:94] pod "kube-scheduler-addons-429840" is "Ready"
	I1208 00:15:33.444255  792815 pod_ready.go:86] duration metric: took 399.956904ms for pod "kube-scheduler-addons-429840" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 00:15:33.444269  792815 pod_ready.go:40] duration metric: took 1.604224653s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 00:15:33.507809  792815 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 00:15:33.510815  792815 out.go:179] * Done! kubectl is now configured to use "addons-429840" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.576599418Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443 UID:afc3fed4-8cf3-419c-98aa-f797fd69ab0e NetNS:/var/run/netns/41bacb47-1898-4b12-a05e-aa41ddd42f6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d548}] Aliases:map[]}"
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.576819096Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.581206464Z" level=info msg="Ran pod sandbox 6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443 with infra container: default/busybox/POD" id=fe499561-076c-4452-8818-20c0c1440d1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.583593722Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=539908b6-0b60-42d8-9e25-b42819e0d639 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.583773285Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=539908b6-0b60-42d8-9e25-b42819e0d639 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.583821064Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=539908b6-0b60-42d8-9e25-b42819e0d639 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.584618184Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c61784f5-2ef9-4a77-984f-604b6c79c614 name=/runtime.v1.ImageService/PullImage
	Dec 08 00:15:34 addons-429840 crio[831]: time="2025-12-08T00:15:34.586132251Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.686175261Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c61784f5-2ef9-4a77-984f-604b6c79c614 name=/runtime.v1.ImageService/PullImage
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.686688686Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87a53e5b-76ba-409c-8809-2724514acbab name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.690026034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b0e297c-7a4b-490d-a7f5-201cad9a1dc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.696354559Z" level=info msg="Creating container: default/busybox/busybox" id=170b314f-347f-498e-99a0-e6930191780c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.696534171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.703105767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.703740194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.721954208Z" level=info msg="Created container 2b8611578b35a3d437403d1f0fec38b8d62beecb42320f77d9cb00d1f3218a66: default/busybox/busybox" id=170b314f-347f-498e-99a0-e6930191780c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.723253241Z" level=info msg="Starting container: 2b8611578b35a3d437403d1f0fec38b8d62beecb42320f77d9cb00d1f3218a66" id=1f737f5d-f947-4de9-b8d2-fd8632ac2c1b name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 00:15:36 addons-429840 crio[831]: time="2025-12-08T00:15:36.725204203Z" level=info msg="Started container" PID=4953 containerID=2b8611578b35a3d437403d1f0fec38b8d62beecb42320f77d9cb00d1f3218a66 description=default/busybox/busybox id=1f737f5d-f947-4de9-b8d2-fd8632ac2c1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.62064847Z" level=info msg="Removing container: f2713b130da4f39092806d080fd9edd3d2351ad33f445928e95f41c6302cdf1b" id=7cb59144-8cd3-4c48-b39a-c3a97f1a7466 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.623290934Z" level=info msg="Error loading conmon cgroup of container f2713b130da4f39092806d080fd9edd3d2351ad33f445928e95f41c6302cdf1b: cgroup deleted" id=7cb59144-8cd3-4c48-b39a-c3a97f1a7466 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.627592722Z" level=info msg="Removed container f2713b130da4f39092806d080fd9edd3d2351ad33f445928e95f41c6302cdf1b: gcp-auth/gcp-auth-certs-create-6xp6z/create" id=7cb59144-8cd3-4c48-b39a-c3a97f1a7466 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.634290203Z" level=info msg="Stopping pod sandbox: 772c7504662dd5d94bc11b76a2024d13d39033e960af9593be51b7ec0a85c32a" id=2b96ade8-2d91-42ec-ae3b-4cf55a2a7126 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.634351209Z" level=info msg="Stopped pod sandbox (already stopped): 772c7504662dd5d94bc11b76a2024d13d39033e960af9593be51b7ec0a85c32a" id=2b96ade8-2d91-42ec-ae3b-4cf55a2a7126 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.634758466Z" level=info msg="Removing pod sandbox: 772c7504662dd5d94bc11b76a2024d13d39033e960af9593be51b7ec0a85c32a" id=d731cfd7-2b99-4045-87b5-eb7877455fe5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 08 00:15:45 addons-429840 crio[831]: time="2025-12-08T00:15:45.642063485Z" level=info msg="Removed pod sandbox: 772c7504662dd5d94bc11b76a2024d13d39033e960af9593be51b7ec0a85c32a" id=d731cfd7-2b99-4045-87b5-eb7877455fe5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	2b8611578b35a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   6a4afe108103e       busybox                                    default
	51022c4a75880       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	16ef54fec815f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	d7a58ce04c20d       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	9a5a7433c6610       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	8fecdf93ed323       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             20 seconds ago       Running             controller                               0                   ec64e0d6b6f8c       ingress-nginx-controller-6c8bf45fb-p78l4   ingress-nginx
	1d51443c465f7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 27 seconds ago       Running             gcp-auth                                 0                   a7e57ff0d712e       gcp-auth-78565c9fb4-sfdxb                  gcp-auth
	65668a2c2509b       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             27 seconds ago       Exited              patch                                    3                   a9a6d9e7d66fb       gcp-auth-certs-patch-mhc8w                 gcp-auth
	28fe644efa1f3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            30 seconds ago       Running             gadget                                   0                   57914ac67def8       gadget-c4kp7                               gadget
	22bde17100e41       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                33 seconds ago       Running             node-driver-registrar                    0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	56946171c705e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              34 seconds ago       Running             registry-proxy                           0                   0c594abfc8fc3       registry-proxy-9vjr9                       kube-system
	aec26cd72a02e       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     38 seconds ago       Running             nvidia-device-plugin-ctr                 0                   59babd64dc4b6       nvidia-device-plugin-daemonset-g6445       kube-system
	3fd9d7897b3fa       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              42 seconds ago       Running             csi-resizer                              0                   4f5736472b410       csi-hostpath-resizer-0                     kube-system
	8eec18b6c152f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             44 seconds ago       Running             csi-attacher                             0                   e40fe2b94ac6f       csi-hostpath-attacher-0                    kube-system
	ac1cbf091afea       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             44 seconds ago       Exited              patch                                    2                   5cf606d3736e0       ingress-nginx-admission-patch-qqch7        ingress-nginx
	88fac620cb5dd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              45 seconds ago       Running             yakd                                     0                   e7409507b9019       yakd-dashboard-5ff678cb9-2jf6z             yakd-dashboard
	2576d4f9f4d72       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   49 seconds ago       Exited              create                                   0                   ec34c9d53f908       ingress-nginx-admission-create-226t7       ingress-nginx
	eb7e7a7efc043       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           49 seconds ago       Running             registry                                 0                   ba32c0548dc1e       registry-6b586f9694-p77p6                  kube-system
	0695bc22a1299       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   51 seconds ago       Running             csi-external-health-monitor-controller   0                   cc9faad97b5e7       csi-hostpathplugin-q66vl                   kube-system
	777823a1b3e68       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      53 seconds ago       Running             volume-snapshot-controller               0                   665894036b3d5       snapshot-controller-7d9fbc56b8-rh7x7       kube-system
	220ce0d6bf3d5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        53 seconds ago       Running             metrics-server                           0                   24f800b14b818       metrics-server-85b7d694d7-9z5hq            kube-system
	9e848158b2cbc       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             56 seconds ago       Running             local-path-provisioner                   0                   376c1b482ebd3       local-path-provisioner-648f6765c9-8rr8f    local-path-storage
	2d6f8acedf212       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               57 seconds ago       Running             cloud-spanner-emulator                   0                   0a106f7566d81       cloud-spanner-emulator-5bdddb765-d4kr8     default
	91a9d71fa2558       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   ba8dbaaba1a2e       snapshot-controller-7d9fbc56b8-675j4       kube-system
	25f99dffaa8ed       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   9598a6cbe996e       kube-ingress-dns-minikube                  kube-system
	87d0a5e3d7fbb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   6abf234907a7b       coredns-66bc5c9577-vjrlp                   kube-system
	f877c300e548d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   462f4f11e5991       storage-provisioner                        kube-system
	49a9b28a64519       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   e3d98de48b4a7       kindnet-zcvnv                              kube-system
	1c7ec16efebcb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   bff42258c26fe       kube-proxy-29dtj                           kube-system
	8bf8d2ee6f616       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   1ca586a02bbe4       kube-apiserver-addons-429840               kube-system
	2fba6529a9c34       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   463e82dea41ce       kube-scheduler-addons-429840               kube-system
	01230e11e24c3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   4705986f530ca       etcd-addons-429840                         kube-system
	92f126df047be       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   cc5914885f51d       kube-controller-manager-addons-429840      kube-system
	
	
	==> coredns [87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960] <==
	[INFO] 10.244.0.18:47445 - 13843 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000108325s
	[INFO] 10.244.0.18:47445 - 21689 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002068115s
	[INFO] 10.244.0.18:47445 - 40452 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002630107s
	[INFO] 10.244.0.18:47445 - 18648 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000113134s
	[INFO] 10.244.0.18:47445 - 51105 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014497s
	[INFO] 10.244.0.18:48287 - 36809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160043s
	[INFO] 10.244.0.18:48287 - 36356 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071599s
	[INFO] 10.244.0.18:37946 - 25739 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095566s
	[INFO] 10.244.0.18:37946 - 25550 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081593s
	[INFO] 10.244.0.18:49717 - 9708 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077555s
	[INFO] 10.244.0.18:49717 - 9271 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182328s
	[INFO] 10.244.0.18:46203 - 3224 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002220044s
	[INFO] 10.244.0.18:46203 - 3397 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002472476s
	[INFO] 10.244.0.18:60437 - 46779 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140071s
	[INFO] 10.244.0.18:60437 - 46910 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132883s
	[INFO] 10.244.0.20:55567 - 61063 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000577901s
	[INFO] 10.244.0.20:59221 - 39027 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014282s
	[INFO] 10.244.0.20:35704 - 25337 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140555s
	[INFO] 10.244.0.20:45333 - 46018 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009962s
	[INFO] 10.244.0.20:56619 - 27765 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125178s
	[INFO] 10.244.0.20:48365 - 44039 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108276s
	[INFO] 10.244.0.20:43533 - 21698 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002100222s
	[INFO] 10.244.0.20:50229 - 28223 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002196987s
	[INFO] 10.244.0.20:33952 - 61830 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000898282s
	[INFO] 10.244.0.20:56611 - 6914 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002263515s
	
	
	==> describe nodes <==
	Name:               addons-429840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-429840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-429840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T00_13_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-429840
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-429840"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 00:13:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-429840
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 00:15:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 00:15:37 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 00:15:37 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 00:15:37 +0000   Mon, 08 Dec 2025 00:13:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 00:15:37 +0000   Mon, 08 Dec 2025 00:14:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-429840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                18ca2914-c576-4e62-b7ae-ff5b28fdea60
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-5bdddb765-d4kr8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-c4kp7                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-sfdxb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-p78l4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         111s
	  kube-system                 coredns-66bc5c9577-vjrlp                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpathplugin-q66vl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-429840                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-zcvnv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-addons-429840                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-429840       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-29dtj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-addons-429840                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-85b7d694d7-9z5hq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         113s
	  kube-system                 nvidia-device-plugin-daemonset-g6445        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-p77p6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-creds-764b6fb674-2h5gp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-9vjr9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-675j4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 snapshot-controller-7d9fbc56b8-rh7x7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  local-path-storage          local-path-provisioner-648f6765c9-8rr8f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2jf6z              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 115s                 kube-proxy       
	  Normal   Starting                 2m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-429840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-429840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node addons-429840 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s                 kubelet          Node addons-429840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s                 kubelet          Node addons-429840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s                 kubelet          Node addons-429840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           118s                 node-controller  Node addons-429840 event: Registered Node addons-429840 in Controller
	  Normal   NodeReady                75s                  kubelet          Node addons-429840 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 7 23:23] overlayfs: idmapped layers are currently not supported
	[ +23.021914] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:24] overlayfs: idmapped layers are currently not supported
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933] <==
	{"level":"warn","ts":"2025-12-08T00:13:41.583725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.597588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.615867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.652908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.673077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.696673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.713896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.731560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.744252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.766015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.785222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.816267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.828493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.837750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.877786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.899232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:41.915670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:42.007877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:57.548924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:13:57.556210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.919995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.934897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.966298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T00:14:19.981269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T00:15:00.233623Z","caller":"traceutil/trace.go:172","msg":"trace[812187976] transaction","detail":"{read_only:false; response_revision:1114; number_of_response:1; }","duration":"160.401499ms","start":"2025-12-08T00:15:00.073198Z","end":"2025-12-08T00:15:00.233599Z","steps":["trace[812187976] 'process raft request'  (duration: 94.432134ms)","trace[812187976] 'compare'  (duration: 65.600032ms)"],"step_count":2}
	
	
	==> gcp-auth [1d51443c465f7b718875deabcaa99ef0a36bb503c3543d483dd8779bcb546f4b] <==
	2025/12/08 00:15:19 GCP Auth Webhook started!
	2025/12/08 00:15:33 Ready to marshal response ...
	2025/12/08 00:15:33 Ready to write response ...
	2025/12/08 00:15:34 Ready to marshal response ...
	2025/12/08 00:15:34 Ready to write response ...
	2025/12/08 00:15:34 Ready to marshal response ...
	2025/12/08 00:15:34 Ready to write response ...
	
	
	==> kernel <==
	 00:15:47 up  4:57,  0 user,  load average: 2.45, 1.72, 1.47
	Linux addons-429840 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1] <==
	I1208 00:13:52.238110       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 00:13:52.238381       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 00:14:22.237841       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 00:14:22.238966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 00:14:22.239010       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1208 00:14:22.239027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1208 00:14:23.738433       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 00:14:23.738465       1 metrics.go:72] Registering metrics
	I1208 00:14:23.738541       1 controller.go:711] "Syncing nftables rules"
	I1208 00:14:32.238125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:14:32.238182       1 main.go:301] handling current node
	I1208 00:14:42.242411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:14:42.242455       1 main.go:301] handling current node
	I1208 00:14:52.237423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:14:52.237458       1 main.go:301] handling current node
	I1208 00:15:02.237603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:15:02.237645       1 main.go:301] handling current node
	I1208 00:15:12.237093       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:15:12.237125       1 main.go:301] handling current node
	I1208 00:15:22.238114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:15:22.238200       1 main.go:301] handling current node
	I1208 00:15:32.238119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:15:32.238169       1 main.go:301] handling current node
	I1208 00:15:42.237144       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1208 00:15:42.237188       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f] <==
	E1208 00:14:32.512370       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.230.19:443: connect: connection refused" logger="UnhandledError"
	W1208 00:14:32.513953       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.230.19:443: connect: connection refused
	E1208 00:14:32.513993       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.230.19:443: connect: connection refused" logger="UnhandledError"
	W1208 00:14:32.612082       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.230.19:443: connect: connection refused
	E1208 00:14:32.612129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.230.19:443: connect: connection refused" logger="UnhandledError"
	W1208 00:14:55.781220       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:55.781268       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1208 00:14:55.781282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1208 00:14:55.782269       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:55.782364       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1208 00:14:55.782377       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1208 00:14:56.231467       1 handler_proxy.go:99] no RequestInfo found in the context
	E1208 00:14:56.231548       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1208 00:14:56.232849       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	E1208 00:14:56.234474       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	E1208 00:14:56.239003       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.161.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.161.127:443: connect: connection refused" logger="UnhandledError"
	I1208 00:14:56.376645       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 00:15:44.513032       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43994: use of closed network connection
	E1208 00:15:44.883022       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44044: use of closed network connection
	
	
	==> kube-controller-manager [92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685] <==
	I1208 00:13:49.948430       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1208 00:13:49.948702       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 00:13:49.948727       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 00:13:49.948745       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 00:13:49.948753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 00:13:49.948764       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 00:13:49.958476       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1208 00:13:49.958525       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1208 00:13:49.958544       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1208 00:13:49.958549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1208 00:13:49.958554       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 00:13:49.959784       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 00:13:49.969463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 00:13:49.985094       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-429840" podCIDRs=["10.244.0.0/24"]
	E1208 00:13:54.929154       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1208 00:14:19.912888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1208 00:14:19.913043       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1208 00:14:19.913085       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1208 00:14:19.954897       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1208 00:14:19.959125       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1208 00:14:20.013902       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 00:14:20.059958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 00:14:34.937097       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1208 00:14:50.023371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1208 00:14:50.067794       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef] <==
	I1208 00:13:51.698643       1 server_linux.go:53] "Using iptables proxy"
	I1208 00:13:51.786125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 00:13:51.903189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 00:13:51.903227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1208 00:13:51.903313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 00:13:51.971703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 00:13:51.971765       1 server_linux.go:132] "Using iptables Proxier"
	I1208 00:13:51.981462       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 00:13:51.981801       1 server.go:527] "Version info" version="v1.34.2"
	I1208 00:13:51.981825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 00:13:51.988898       1 config.go:200] "Starting service config controller"
	I1208 00:13:51.988930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 00:13:51.988954       1 config.go:106] "Starting endpoint slice config controller"
	I1208 00:13:51.988958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 00:13:51.988970       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 00:13:51.988987       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 00:13:51.995944       1 config.go:309] "Starting node config controller"
	I1208 00:13:51.995969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 00:13:51.995977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 00:13:52.089453       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 00:13:52.089489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 00:13:52.089540       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b] <==
	E1208 00:13:43.001732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 00:13:43.001783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 00:13:43.001836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 00:13:43.001894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 00:13:43.001944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 00:13:43.001991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 00:13:43.002036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 00:13:43.002100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 00:13:43.002157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 00:13:43.002203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 00:13:43.002231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 00:13:43.832991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 00:13:43.841698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 00:13:43.874942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 00:13:43.887957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1208 00:13:43.977802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1208 00:13:44.004296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 00:13:44.055165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 00:13:44.073659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 00:13:44.084500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 00:13:44.105902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 00:13:44.173289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 00:13:44.182981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 00:13:44.200735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1208 00:13:46.732128       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 00:15:12 addons-429840 kubelet[1266]: I1208 00:15:12.289921    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9vjr9" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:15:12 addons-429840 kubelet[1266]: I1208 00:15:12.308153    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-g6445" podStartSLOduration=5.068036464 podStartE2EDuration="40.308132397s" podCreationTimestamp="2025-12-08 00:14:32 +0000 UTC" firstStartedPulling="2025-12-08 00:14:33.498452627 +0000 UTC m=+48.010497702" lastFinishedPulling="2025-12-08 00:15:08.738548469 +0000 UTC m=+83.250593635" observedRunningTime="2025-12-08 00:15:09.292863557 +0000 UTC m=+83.804908640" watchObservedRunningTime="2025-12-08 00:15:12.308132397 +0000 UTC m=+86.820177480"
	Dec 08 00:15:12 addons-429840 kubelet[1266]: I1208 00:15:12.308313    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-9vjr9" podStartSLOduration=1.7570951689999998 podStartE2EDuration="40.308306331s" podCreationTimestamp="2025-12-08 00:14:32 +0000 UTC" firstStartedPulling="2025-12-08 00:14:33.509419117 +0000 UTC m=+48.021464192" lastFinishedPulling="2025-12-08 00:15:12.060630279 +0000 UTC m=+86.572675354" observedRunningTime="2025-12-08 00:15:12.306969882 +0000 UTC m=+86.819014990" watchObservedRunningTime="2025-12-08 00:15:12.308306331 +0000 UTC m=+86.820351406"
	Dec 08 00:15:13 addons-429840 kubelet[1266]: I1208 00:15:13.296131    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9vjr9" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 00:15:19 addons-429840 kubelet[1266]: I1208 00:15:19.628932    1266 scope.go:117] "RemoveContainer" containerID="919fa38a584f77b6dbad2b9dc4a75e136de6575afd95884f3b80b06c5c0ed03e"
	Dec 08 00:15:20 addons-429840 kubelet[1266]: I1208 00:15:20.351053    1266 scope.go:117] "RemoveContainer" containerID="919fa38a584f77b6dbad2b9dc4a75e136de6575afd95884f3b80b06c5c0ed03e"
	Dec 08 00:15:20 addons-429840 kubelet[1266]: I1208 00:15:20.371390    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-c4kp7" podStartSLOduration=69.230742537 podStartE2EDuration="1m24.371369854s" podCreationTimestamp="2025-12-08 00:13:56 +0000 UTC" firstStartedPulling="2025-12-08 00:15:01.497274578 +0000 UTC m=+76.009319661" lastFinishedPulling="2025-12-08 00:15:16.637901821 +0000 UTC m=+91.149946978" observedRunningTime="2025-12-08 00:15:17.353402207 +0000 UTC m=+91.865447356" watchObservedRunningTime="2025-12-08 00:15:20.371369854 +0000 UTC m=+94.883414937"
	Dec 08 00:15:20 addons-429840 kubelet[1266]: I1208 00:15:20.681818    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-sfdxb" podStartSLOduration=65.624732684 podStartE2EDuration="1m20.681796332s" podCreationTimestamp="2025-12-08 00:14:00 +0000 UTC" firstStartedPulling="2025-12-08 00:15:04.644277381 +0000 UTC m=+79.156322456" lastFinishedPulling="2025-12-08 00:15:19.701340947 +0000 UTC m=+94.213386104" observedRunningTime="2025-12-08 00:15:20.407068201 +0000 UTC m=+94.919113292" watchObservedRunningTime="2025-12-08 00:15:20.681796332 +0000 UTC m=+95.193841464"
	Dec 08 00:15:21 addons-429840 kubelet[1266]: I1208 00:15:21.579292    1266 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27smd\" (UniqueName: \"kubernetes.io/projected/b59f2b6f-bf6d-4662-9247-3dc96ca9beef-kube-api-access-27smd\") pod \"b59f2b6f-bf6d-4662-9247-3dc96ca9beef\" (UID: \"b59f2b6f-bf6d-4662-9247-3dc96ca9beef\") "
	Dec 08 00:15:21 addons-429840 kubelet[1266]: I1208 00:15:21.590525    1266 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59f2b6f-bf6d-4662-9247-3dc96ca9beef-kube-api-access-27smd" (OuterVolumeSpecName: "kube-api-access-27smd") pod "b59f2b6f-bf6d-4662-9247-3dc96ca9beef" (UID: "b59f2b6f-bf6d-4662-9247-3dc96ca9beef"). InnerVolumeSpecName "kube-api-access-27smd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 08 00:15:21 addons-429840 kubelet[1266]: I1208 00:15:21.680112    1266 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27smd\" (UniqueName: \"kubernetes.io/projected/b59f2b6f-bf6d-4662-9247-3dc96ca9beef-kube-api-access-27smd\") on node \"addons-429840\" DevicePath \"\""
	Dec 08 00:15:22 addons-429840 kubelet[1266]: I1208 00:15:22.440387    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a6d9e7d66fb8755f2b32a542e5ec17f928f5aaf0761ad39e65d91dc2f3ecf5"
	Dec 08 00:15:28 addons-429840 kubelet[1266]: I1208 00:15:28.805660    1266 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 08 00:15:28 addons-429840 kubelet[1266]: I1208 00:15:28.805708    1266 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 08 00:15:31 addons-429840 kubelet[1266]: I1208 00:15:31.513144    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-p78l4" podStartSLOduration=73.988158105 podStartE2EDuration="1m35.51312745s" podCreationTimestamp="2025-12-08 00:13:56 +0000 UTC" firstStartedPulling="2025-12-08 00:15:04.756778371 +0000 UTC m=+79.268823446" lastFinishedPulling="2025-12-08 00:15:26.281747716 +0000 UTC m=+100.793792791" observedRunningTime="2025-12-08 00:15:26.489110873 +0000 UTC m=+101.001155956" watchObservedRunningTime="2025-12-08 00:15:31.51312745 +0000 UTC m=+106.025172525"
	Dec 08 00:15:34 addons-429840 kubelet[1266]: I1208 00:15:34.239631    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-q66vl" podStartSLOduration=4.456259361 podStartE2EDuration="1m2.239609656s" podCreationTimestamp="2025-12-08 00:14:32 +0000 UTC" firstStartedPulling="2025-12-08 00:14:33.456291386 +0000 UTC m=+47.968336460" lastFinishedPulling="2025-12-08 00:15:31.239641681 +0000 UTC m=+105.751686755" observedRunningTime="2025-12-08 00:15:31.517516222 +0000 UTC m=+106.029561321" watchObservedRunningTime="2025-12-08 00:15:34.239609656 +0000 UTC m=+108.751654730"
	Dec 08 00:15:34 addons-429840 kubelet[1266]: I1208 00:15:34.284686    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/afc3fed4-8cf3-419c-98aa-f797fd69ab0e-gcp-creds\") pod \"busybox\" (UID: \"afc3fed4-8cf3-419c-98aa-f797fd69ab0e\") " pod="default/busybox"
	Dec 08 00:15:34 addons-429840 kubelet[1266]: I1208 00:15:34.284914    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ck2c\" (UniqueName: \"kubernetes.io/projected/afc3fed4-8cf3-419c-98aa-f797fd69ab0e-kube-api-access-2ck2c\") pod \"busybox\" (UID: \"afc3fed4-8cf3-419c-98aa-f797fd69ab0e\") " pod="default/busybox"
	Dec 08 00:15:34 addons-429840 kubelet[1266]: W1208 00:15:34.581226    1266 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4788dff0a9c0771ce27651862a62b05268cb6eb8e3054ea8dd1be4ba369e5e3e/crio-6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443 WatchSource:0}: Error finding container 6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443: Status 404 returned error can't find the container with id 6a4afe108103e89ca7f017cfa2573aa11ea6bb003e5448e1c34e5031a8b33443
	Dec 08 00:15:35 addons-429840 kubelet[1266]: I1208 00:15:35.632385    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2a7a193-c03e-4e69-bdcf-d0409a199e6d" path="/var/lib/kubelet/pods/b2a7a193-c03e-4e69-bdcf-d0409a199e6d/volumes"
	Dec 08 00:15:36 addons-429840 kubelet[1266]: E1208 00:15:36.503965    1266 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 08 00:15:36 addons-429840 kubelet[1266]: E1208 00:15:36.504061    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/314d2c2e-10b3-42ca-9055-ee48f7ce3891-gcr-creds podName:314d2c2e-10b3-42ca-9055-ee48f7ce3891 nodeName:}" failed. No retries permitted until 2025-12-08 00:16:40.504041183 +0000 UTC m=+175.016086266 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/314d2c2e-10b3-42ca-9055-ee48f7ce3891-gcr-creds") pod "registry-creds-764b6fb674-2h5gp" (UID: "314d2c2e-10b3-42ca-9055-ee48f7ce3891") : secret "registry-creds-gcr" not found
	Dec 08 00:15:45 addons-429840 kubelet[1266]: I1208 00:15:45.619445    1266 scope.go:117] "RemoveContainer" containerID="f2713b130da4f39092806d080fd9edd3d2351ad33f445928e95f41c6302cdf1b"
	Dec 08 00:15:45 addons-429840 kubelet[1266]: E1208 00:15:45.762544    1266 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fd8665cbb2fcfb11271487b3d8991c4e60cd018cb6c91b9469cd197102bc989a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fd8665cbb2fcfb11271487b3d8991c4e60cd018cb6c91b9469cd197102bc989a/diff: no such file or directory, extraDiskErr: <nil>
	Dec 08 00:15:45 addons-429840 kubelet[1266]: E1208 00:15:45.777513    1266 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f0696ba4e0ba94151302b3fdc2f26b5943b0d5be2ecd23b755cb23d640d7ef47/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f0696ba4e0ba94151302b3fdc2f26b5943b0d5be2ecd23b755cb23d640d7ef47/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-mhc8w_b59f2b6f-bf6d-4662-9247-3dc96ca9beef/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-mhc8w_b59f2b6f-bf6d-4662-9247-3dc96ca9beef/patch/1.log: no such file or directory
	
	
	==> storage-provisioner [f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c] <==
	W1208 00:15:21.672763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:23.676493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:23.681354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:25.684459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:25.689260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:27.692525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:27.697248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:29.700341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:29.705122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:31.707886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:31.712356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:33.716129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:33.721449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:35.725055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:35.729913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:37.732439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:37.736749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:39.740417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:39.744926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:41.747486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:41.754006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:43.757469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:43.762043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:45.765428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 00:15:45.770639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-429840 -n addons-429840
helpers_test.go:269: (dbg) Run:  kubectl --context addons-429840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-mhc8w ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7 registry-creds-764b6fb674-2h5gp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-429840 describe pod gcp-auth-certs-patch-mhc8w ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7 registry-creds-764b6fb674-2h5gp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-429840 describe pod gcp-auth-certs-patch-mhc8w ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7 registry-creds-764b6fb674-2h5gp: exit status 1 (87.268776ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-mhc8w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-226t7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qqch7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2h5gp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-429840 describe pod gcp-auth-certs-patch-mhc8w ingress-nginx-admission-create-226t7 ingress-nginx-admission-patch-qqch7 registry-creds-764b6fb674-2h5gp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable headlamp --alsologtostderr -v=1: exit status 11 (281.923065ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:15:48.288362  799360 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:15:48.289260  799360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:48.289316  799360 out.go:374] Setting ErrFile to fd 2...
	I1208 00:15:48.289339  799360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:15:48.289646  799360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:15:48.289989  799360 mustload.go:66] Loading cluster: addons-429840
	I1208 00:15:48.290462  799360 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:48.290511  799360 addons.go:622] checking whether the cluster is paused
	I1208 00:15:48.290651  799360 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:15:48.290689  799360 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:15:48.291310  799360 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:15:48.309837  799360 ssh_runner.go:195] Run: systemctl --version
	I1208 00:15:48.309896  799360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:15:48.329489  799360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:15:48.437554  799360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:15:48.437662  799360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:15:48.470923  799360 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:15:48.470950  799360 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:15:48.470956  799360 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:15:48.470965  799360 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:15:48.470969  799360 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:15:48.470975  799360 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:15:48.470979  799360 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:15:48.470982  799360 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:15:48.470985  799360 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:15:48.470990  799360 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:15:48.470994  799360 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:15:48.470997  799360 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:15:48.471000  799360 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:15:48.471003  799360 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:15:48.471007  799360 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:15:48.471012  799360 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:15:48.471015  799360 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:15:48.471019  799360 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:15:48.471022  799360 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:15:48.471025  799360 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:15:48.471030  799360 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:15:48.471037  799360 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:15:48.471040  799360 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:15:48.471044  799360 cri.go:89] found id: ""
	I1208 00:15:48.471101  799360 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:15:48.486504  799360 out.go:203] 
	W1208 00:15:48.489320  799360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:15:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:15:48.489348  799360 out.go:285] * 
	* 
	W1208 00:15:48.495837  799360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:15:48.498777  799360 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.27s)
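All of the addon-disable failures in this run share the root cause visible in the stderr above: minikube's paused-state check lists kube-system containers with crictl and then runs "sudo runc list -f json", which exits non-zero because /run/runc is missing on the node, producing the MK_ADDON_DISABLE_PAUSED exit. A minimal sketch for reproducing that check by hand against this profile, reusing the commands shown in the log (the final ls is an added assumption, only there to confirm the missing directory):

	minikube -p addons-429840 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	minikube -p addons-429840 ssh "sudo runc list -f json"
	minikube -p addons-429840 ssh "ls -ld /run/runc"

The same error reappears below for CloudSpanner, LocalPath and NvidiaDevicePlugin.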

                                                
                                    
TestAddons/parallel/CloudSpanner (6.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-d4kr8" [61dab6c2-ca28-4d51-930c-0907e19c1cd1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004420681s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (317.162977ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:17:06.742094  801412 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:17:06.742961  801412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:17:06.742975  801412 out.go:374] Setting ErrFile to fd 2...
	I1208 00:17:06.742987  801412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:17:06.743486  801412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:17:06.743822  801412 mustload.go:66] Loading cluster: addons-429840
	I1208 00:17:06.744543  801412 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:17:06.744572  801412 addons.go:622] checking whether the cluster is paused
	I1208 00:17:06.744763  801412 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:17:06.744783  801412 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:17:06.745603  801412 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:17:06.766946  801412 ssh_runner.go:195] Run: systemctl --version
	I1208 00:17:06.767030  801412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:17:06.787150  801412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:17:06.904236  801412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:17:06.904339  801412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:17:06.943457  801412 cri.go:89] found id: "1a1e746aca88aecfc4afacb1115769a107968b6dc5b24950e45fa011872de8b4"
	I1208 00:17:06.943494  801412 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:17:06.943500  801412 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:17:06.943504  801412 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:17:06.943508  801412 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:17:06.943511  801412 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:17:06.943514  801412 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:17:06.943517  801412 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:17:06.943520  801412 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:17:06.943531  801412 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:17:06.943535  801412 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:17:06.943538  801412 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:17:06.943541  801412 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:17:06.943545  801412 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:17:06.943551  801412 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:17:06.943560  801412 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:17:06.943568  801412 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:17:06.943572  801412 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:17:06.943575  801412 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:17:06.943579  801412 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:17:06.943583  801412 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:17:06.943586  801412 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:17:06.943589  801412 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:17:06.943591  801412 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:17:06.943595  801412 cri.go:89] found id: ""
	I1208 00:17:06.943653  801412 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:17:06.962539  801412 out.go:203] 
	W1208 00:17:06.965683  801412 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:17:06.965717  801412 out.go:285] * 
	* 
	W1208 00:17:06.972874  801412 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:17:06.976064  801412 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.33s)

                                                
                                    
TestAddons/parallel/LocalPath (8.83s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-429840 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-429840 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [9d3c5ddb-d7d0-4d2c-a897-9045fb2ea30e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [9d3c5ddb-d7d0-4d2c-a897-9045fb2ea30e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [9d3c5ddb-d7d0-4d2c-a897-9045fb2ea30e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003314014s
addons_test.go:967: (dbg) Run:  kubectl --context addons-429840 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 ssh "cat /opt/local-path-provisioner/pvc-12f50409-998e-4c25-af97-bb31f5aacd15_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-429840 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-429840 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (683.189836ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:17:00.123777  801278 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:17:00.126392  801278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:17:00.126415  801278 out.go:374] Setting ErrFile to fd 2...
	I1208 00:17:00.126423  801278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:17:00.126911  801278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:17:00.127387  801278 mustload.go:66] Loading cluster: addons-429840
	I1208 00:17:00.129975  801278 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:17:00.133006  801278 addons.go:622] checking whether the cluster is paused
	I1208 00:17:00.133258  801278 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:17:00.133274  801278 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:17:00.134059  801278 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:17:00.286433  801278 ssh_runner.go:195] Run: systemctl --version
	I1208 00:17:00.286726  801278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:17:00.338073  801278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:17:00.506783  801278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:17:00.506904  801278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:17:00.595113  801278 cri.go:89] found id: "cbf8c8b5c397cddfc0b5dda0621b62391aa2a4152478a965c70a915241aaa294"
	I1208 00:17:00.595139  801278 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:17:00.595146  801278 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:17:00.595150  801278 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:17:00.595154  801278 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:17:00.595158  801278 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:17:00.595191  801278 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:17:00.595195  801278 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:17:00.595199  801278 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:17:00.595206  801278 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:17:00.595215  801278 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:17:00.595219  801278 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:17:00.595224  801278 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:17:00.595228  801278 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:17:00.595231  801278 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:17:00.595261  801278 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:17:00.595267  801278 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:17:00.595272  801278 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:17:00.595276  801278 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:17:00.595279  801278 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:17:00.595283  801278 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:17:00.595286  801278 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:17:00.595290  801278 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:17:00.595293  801278 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:17:00.595297  801278 cri.go:89] found id: ""
	I1208 00:17:00.595359  801278 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:17:00.629366  801278 out.go:203] 
	W1208 00:17:00.632883  801278 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:17:00.632906  801278 out.go:285] * 
	* 
	W1208 00:17:00.639844  801278 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:17:00.643826  801278 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.83s)
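Note that the data-path portion of this test passed (the PVC bound, the test pod completed, and the ssh cat read back file1); only the trailing addon-disable step failed with the same runc error as above. A small sketch, assuming one wants to confirm by hand that the provisioned volume from this run was cleaned up after the PVC deletion; the directory prefix comes from the ssh command in the log, the rest is an illustrative assumption:

	minikube -p addons-429840 ssh "ls /opt/local-path-provisioner"
	kubectl --context addons-429840 get pvc test-pvc -n default

An empty directory listing and a NotFound from kubectl would indicate the delete steps at addons_test.go:988-992 fully cleaned up.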

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-g6445" [2e94dca4-cf09-452e-b309-6176e09f4387] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005920392s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (296.051902ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:16:45.310646  800818 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:45.316298  800818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:45.316384  800818 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:45.316427  800818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:45.316752  800818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:45.317135  800818 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:45.317604  800818 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:45.317649  800818 addons.go:622] checking whether the cluster is paused
	I1208 00:16:45.317803  800818 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:45.317834  800818 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:45.318457  800818 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:45.341262  800818 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:45.341316  800818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:45.369911  800818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:45.481329  800818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:45.481414  800818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:45.509948  800818 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:45.509976  800818 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:45.509981  800818 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:45.509989  800818 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:45.509992  800818 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:45.509997  800818 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:45.510000  800818 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:45.510004  800818 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:45.510007  800818 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:45.510013  800818 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:45.510017  800818 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:45.510020  800818 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:45.510022  800818 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:45.510025  800818 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:45.510028  800818 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:45.510033  800818 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:45.510039  800818 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:45.510043  800818 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:45.510046  800818 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:45.510049  800818 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:45.510053  800818 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:45.510056  800818 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:45.510060  800818 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:45.510063  800818 cri.go:89] found id: ""
	I1208 00:16:45.510121  800818 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:45.524594  800818 out.go:203] 
	W1208 00:16:45.527494  800818 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:45.527516  800818 out.go:285] * 
	* 
	W1208 00:16:45.533821  800818 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:45.536899  800818 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.30s)
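Note: LocalPath above, NvidiaDevicePlugin here, and Yakd below all fail the same way: before disabling an addon, minikube checks whether the cluster is paused by listing containers, and that check shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" on this CRI-O node. A hedged reproduction sketch, assuming SSH access to the same addons-429840 profile, replays the two commands the log shows minikube running so the mismatch can be confirmed by hand; it is a diagnostic aid, not a fix:

	# crictl does see the kube-system containers (matches the cri.go:89 "found id" lines above)
	out/minikube-linux-arm64 -p addons-429840 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# runc has no state directory to read here, so the paused check fails with exit status 1
	out/minikube-linux-arm64 -p addons-429840 ssh "sudo runc list -f json"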

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2jf6z" [a9842021-3c03-44e6-87f2-f563fa105120] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004219975s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-429840 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-429840 addons disable yakd --alsologtostderr -v=1: exit status 11 (272.492331ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1208 00:16:51.601999  800969 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:16:51.602804  800969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:51.602818  800969 out.go:374] Setting ErrFile to fd 2...
	I1208 00:16:51.602823  800969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:16:51.603118  800969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:16:51.603410  800969 mustload.go:66] Loading cluster: addons-429840
	I1208 00:16:51.603792  800969 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:51.603809  800969 addons.go:622] checking whether the cluster is paused
	I1208 00:16:51.603918  800969 config.go:182] Loaded profile config "addons-429840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:16:51.603944  800969 host.go:66] Checking if "addons-429840" exists ...
	I1208 00:16:51.604463  800969 cli_runner.go:164] Run: docker container inspect addons-429840 --format={{.State.Status}}
	I1208 00:16:51.623939  800969 ssh_runner.go:195] Run: systemctl --version
	I1208 00:16:51.623996  800969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-429840
	I1208 00:16:51.645609  800969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/addons-429840/id_rsa Username:docker}
	I1208 00:16:51.753370  800969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:16:51.753488  800969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:16:51.785139  800969 cri.go:89] found id: "62727839aa55504c4a2abfb73d1bba764b7a3925a8392124663832d69ab5f834"
	I1208 00:16:51.785164  800969 cri.go:89] found id: "51022c4a75880d0b70ca532d9e31d08f4124f0034600198b222780f53bed5783"
	I1208 00:16:51.785169  800969 cri.go:89] found id: "16ef54fec815f267701d14a5471e87eedb4db314352690509c4401c52c13b4ea"
	I1208 00:16:51.785173  800969 cri.go:89] found id: "d7a58ce04c20ddb53d48c150a8581849f342e016eb12fb37c036732c4c506641"
	I1208 00:16:51.785176  800969 cri.go:89] found id: "9a5a7433c66102c3f9e8b4d2295596fa8b6a4190548ed93265d96942dba932a3"
	I1208 00:16:51.785184  800969 cri.go:89] found id: "22bde17100e414ea977c5edca4b564e5fd36bb7ae0fcb6bb3ed7292c78640771"
	I1208 00:16:51.785208  800969 cri.go:89] found id: "56946171c705ebc45b785c364408146d1bb9071d57fcb24952f2802cba70a03b"
	I1208 00:16:51.785212  800969 cri.go:89] found id: "aec26cd72a02ea369fe6abeb7a1ac56cc4cbd8a132102a2583b10a008c24a7e6"
	I1208 00:16:51.785215  800969 cri.go:89] found id: "3fd9d7897b3faf6bef37c4357120ca254991435b9338010058a2d3f49b4cb8e9"
	I1208 00:16:51.785224  800969 cri.go:89] found id: "8eec18b6c152f230a391702ae0f832f2b6b66d2aa7c627933f04ad7eb69fabe3"
	I1208 00:16:51.785240  800969 cri.go:89] found id: "eb7e7a7efc0438324ef9bdc35c26910f4892ced8a59b98030a076263e79fced0"
	I1208 00:16:51.785244  800969 cri.go:89] found id: "0695bc22a1299e8fbc8f580969ffd6da1d68a0bd852a8860ae25336dc9ce65c5"
	I1208 00:16:51.785247  800969 cri.go:89] found id: "777823a1b3e68a954d763775dfb9cfcfebe3cdc7385c185ed25b34cebce54290"
	I1208 00:16:51.785251  800969 cri.go:89] found id: "220ce0d6bf3d5c45ebcee116c47e7cccfc44039fb78b66c7c61cbf17a1966713"
	I1208 00:16:51.785255  800969 cri.go:89] found id: "91a9d71fa255875aeffb71322ff665888bd5c16176c67d8a13939b3efe8e349d"
	I1208 00:16:51.785264  800969 cri.go:89] found id: "25f99dffaa8ed534ebe3f2e2378622bd7b91ead082457db575b782301aa32697"
	I1208 00:16:51.785284  800969 cri.go:89] found id: "87d0a5e3d7fbbf6e861301f7e5ea2e2e0a39ffa591118e6ffd6a19cebe472960"
	I1208 00:16:51.785301  800969 cri.go:89] found id: "f877c300e548d982a80542a521f83eab7a421218b8daa900c28c9d17781d355c"
	I1208 00:16:51.785305  800969 cri.go:89] found id: "49a9b28a6451913b2a2e073a8e21e9bdf44f089a868ef90ed0c87e741f8d0bf1"
	I1208 00:16:51.785308  800969 cri.go:89] found id: "1c7ec16efebcb5404dbd1ab28bc62ece21b6fd2e508db62c8f551971b9f2a4ef"
	I1208 00:16:51.785313  800969 cri.go:89] found id: "8bf8d2ee6f616da6eb946ff0e6dcb6dd4bd32f364c70a3f216c5fc05356c150f"
	I1208 00:16:51.785320  800969 cri.go:89] found id: "2fba6529a9c348ac9a41ac0079faabaed203f0b81d5f5ac1c7cce34e3c52219b"
	I1208 00:16:51.785323  800969 cri.go:89] found id: "01230e11e24c34b7d99f8bb076fb9fa7789211e8f55622f4967581f0d6768933"
	I1208 00:16:51.785326  800969 cri.go:89] found id: "92f126df047be934b717acb53a50571488eeb54b5f51d92471a478e3387e1685"
	I1208 00:16:51.785329  800969 cri.go:89] found id: ""
	I1208 00:16:51.785393  800969 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 00:16:51.800940  800969 out.go:203] 
	W1208 00:16:51.803951  800969 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 00:16:51.803972  800969 out.go:285] * 
	* 
	W1208 00:16:51.810318  800969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:16:51.813597  800969 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-429840 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 00:25:34.383073  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:02.091014  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.336139  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.342917  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.354348  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.375770  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.417252  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.498710  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.660214  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:46.981971  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:47.624142  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:48.905855  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:51.468831  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:56.590836  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:28:06.833018  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:28:27.314989  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:29:08.276753  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:30:30.198121  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:30:34.381953  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.132858372s)

-- stdout --
	* [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:37175
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:37175 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000329177s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001527831s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001527831s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
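The kubeadm init failure is repeated three times above and always stalls at the kubelet health check on http://127.0.0.1:10248/healthz after the 4m0s wait; the stderr ends with minikube's own suggestion about the kubelet cgroup driver and a related issue link. A hedged retry sketch, assuming that suggestion applies here (it is not a confirmed root cause), re-runs the same start command with the suggested extra kubelet config and then inspects the kubelet on the node if it still fails to come up:

	# same arguments as the failing run, plus the cgroup-driver hint from the log's Suggestion line
	out/minikube-linux-arm64 start -p functional-525396 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the healthz wait still times out, check the kubelet directly (commands suggested by the kubeadm output)
	out/minikube-linux-arm64 -p functional-525396 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-arm64 -p functional-525396 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"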
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 6 (359.911563ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 00:32:01.579449  826036 status.go:458] kubeconfig endpoint: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
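The status output above shows the container host is Running while the kubeconfig no longer has an endpoint for functional-525396, which is why the kubectl-based checks fail. A hedged follow-up sketch, echoing the warning's own suggestion and assuming the profile is still intact, repoints the kubeconfig and re-checks host status:

	out/minikube-linux-arm64 -p functional-525396 update-context
	out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396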
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-714395 ssh sudo cat /etc/ssl/certs/7918072.pem                                                                                                 │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /usr/share/ca-certificates/7918072.pem                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save kicbase/echo-server:functional-714395 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image rm kicbase/echo-server:functional-714395 --alsologtostderr                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format short --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format yaml --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format json --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format table --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh pgrep buildkitd                                                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image          │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                                    │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete         │ -p functional-714395                                                                                                                                      │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start          │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
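The last row of the table is the start that produced the log below. Reassembled from the logged arguments (the harness invokes it through the binary named by MINIKUBE_BIN further down), the invocation is:

	minikube start -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0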
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:23:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:23:40.118749  820476 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:23:40.118936  820476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:40.118940  820476 out.go:374] Setting ErrFile to fd 2...
	I1208 00:23:40.118945  820476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:40.119191  820476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:23:40.119597  820476 out.go:368] Setting JSON to false
	I1208 00:23:40.120397  820476 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18353,"bootTime":1765135068,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:23:40.120453  820476 start.go:143] virtualization:  
	I1208 00:23:40.125053  820476 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:23:40.129754  820476 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:23:40.129848  820476 notify.go:221] Checking for updates...
	I1208 00:23:40.136664  820476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:23:40.140007  820476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:23:40.143208  820476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:23:40.146272  820476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:23:40.149501  820476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
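For local reproduction, the settings above correspond to environment variables on the CI agent; exporting them by hand would look roughly like this (values copied verbatim from the lines above; MINIKUBE_FORCE_SYSTEMD is logged as empty):

	export MINIKUBE_LOCATION=22054
	export MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	export KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	export MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	export MINIKUBE_BIN=out/minikube-linux-arm64
	export MINIKUBE_FORCE_SYSTEMD=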
	I1208 00:23:40.152881  820476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:23:40.189342  820476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:23:40.189468  820476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:23:40.246353  820476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:23:40.236484543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:23:40.246455  820476 docker.go:319] overlay module found
	I1208 00:23:40.249964  820476 out.go:179] * Using the docker driver based on user configuration
	I1208 00:23:40.252953  820476 start.go:309] selected driver: docker
	I1208 00:23:40.252961  820476 start.go:927] validating driver "docker" against <nil>
	I1208 00:23:40.252974  820476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:23:40.253726  820476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:23:40.315541  820476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:23:40.306618749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:23:40.315680  820476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:23:40.315898  820476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:23:40.319003  820476 out.go:179] * Using Docker driver with root privileges
	I1208 00:23:40.322002  820476 cni.go:84] Creating CNI manager for ""
	I1208 00:23:40.322067  820476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:23:40.322076  820476 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:23:40.322165  820476 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:23:40.327268  820476 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:23:40.330242  820476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:23:40.333324  820476 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:23:40.336289  820476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:23:40.336368  820476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:23:40.336387  820476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:23:40.336395  820476 cache.go:65] Caching tarball of preloaded images
	I1208 00:23:40.336486  820476 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:23:40.336494  820476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:23:40.336829  820476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:23:40.336848  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json: {Name:mkf39c9b8dfe933061b1647719d1218129a6847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:40.356438  820476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:23:40.356452  820476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:23:40.356466  820476 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:23:40.356497  820476 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:23:40.356597  820476 start.go:364] duration metric: took 85.802µs to acquireMachinesLock for "functional-525396"
	I1208 00:23:40.356621  820476 start.go:93] Provisioning new machine with config: &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:23:40.356690  820476 start.go:125] createHost starting for "" (driver="docker")
	I1208 00:23:40.360065  820476 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1208 00:23:40.360342  820476 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:37175 to docker env.
	I1208 00:23:40.360367  820476 start.go:159] libmachine.API.Create for "functional-525396" (driver="docker")
	I1208 00:23:40.360386  820476 client.go:173] LocalClient.Create starting
	I1208 00:23:40.360449  820476 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 00:23:40.360479  820476 main.go:143] libmachine: Decoding PEM data...
	I1208 00:23:40.360492  820476 main.go:143] libmachine: Parsing certificate...
	I1208 00:23:40.360554  820476 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 00:23:40.360572  820476 main.go:143] libmachine: Decoding PEM data...
	I1208 00:23:40.360591  820476 main.go:143] libmachine: Parsing certificate...
	I1208 00:23:40.360964  820476 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 00:23:40.376935  820476 cli_runner.go:211] docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 00:23:40.377009  820476 network_create.go:284] running [docker network inspect functional-525396] to gather additional debugging logs...
	I1208 00:23:40.377025  820476 cli_runner.go:164] Run: docker network inspect functional-525396
	W1208 00:23:40.392709  820476 cli_runner.go:211] docker network inspect functional-525396 returned with exit code 1
	I1208 00:23:40.392729  820476 network_create.go:287] error running [docker network inspect functional-525396]: docker network inspect functional-525396: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-525396 not found
	I1208 00:23:40.392742  820476 network_create.go:289] output of [docker network inspect functional-525396]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-525396 not found
	
	** /stderr **
	I1208 00:23:40.392843  820476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:23:40.409187  820476 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001967b00}
	I1208 00:23:40.409223  820476 network_create.go:124] attempt to create docker network functional-525396 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 00:23:40.409288  820476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-525396 functional-525396
	I1208 00:23:40.462525  820476 network_create.go:108] docker network functional-525396 192.168.49.0/24 created
	I1208 00:23:40.462547  820476 kic.go:121] calculated static IP "192.168.49.2" for the "functional-525396" container
	I1208 00:23:40.462620  820476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 00:23:40.478960  820476 cli_runner.go:164] Run: docker volume create functional-525396 --label name.minikube.sigs.k8s.io=functional-525396 --label created_by.minikube.sigs.k8s.io=true
	I1208 00:23:40.498615  820476 oci.go:103] Successfully created a docker volume functional-525396
	I1208 00:23:40.498687  820476 cli_runner.go:164] Run: docker run --rm --name functional-525396-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-525396 --entrypoint /usr/bin/test -v functional-525396:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 00:23:41.026160  820476 oci.go:107] Successfully prepared a docker volume functional-525396
	I1208 00:23:41.026216  820476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:23:41.026223  820476 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 00:23:41.026313  820476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-525396:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 00:23:45.063860  820476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-525396:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.037491213s)
	I1208 00:23:45.063886  820476 kic.go:203] duration metric: took 4.037657491s to extract preloaded images to volume ...
	W1208 00:23:45.064121  820476 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 00:23:45.064249  820476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 00:23:45.133435  820476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-525396 --name functional-525396 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-525396 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-525396 --network functional-525396 --ip 192.168.49.2 --volume functional-525396:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
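The node container itself comes from the single docker run logged above; the same command, reformatted for readability with related flags grouped (nothing added or removed):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname functional-525396 --name functional-525396 \
	  --label created_by.minikube.sigs.k8s.io=true \
	  --label name.minikube.sigs.k8s.io=functional-525396 \
	  --label role.minikube.sigs.k8s.io= \
	  --label mode.minikube.sigs.k8s.io=functional-525396 \
	  --network functional-525396 --ip 192.168.49.2 \
	  --volume functional-525396:/var \
	  --memory=4096mb --cpus=2 -e container=docker \
	  --expose 8441 \
	  --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164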
	I1208 00:23:45.475308  820476 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Running}}
	I1208 00:23:45.497144  820476 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:23:45.525364  820476 cli_runner.go:164] Run: docker exec functional-525396 stat /var/lib/dpkg/alternatives/iptables
	I1208 00:23:45.577473  820476 oci.go:144] the created container "functional-525396" has a running status.
	I1208 00:23:45.577492  820476 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa...
	I1208 00:23:45.745174  820476 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 00:23:45.772640  820476 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:23:45.803649  820476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 00:23:45.803661  820476 kic_runner.go:114] Args: [docker exec --privileged functional-525396 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 00:23:45.874407  820476 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:23:45.905762  820476 machine.go:94] provisionDockerMachine start ...
	I1208 00:23:45.905844  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:45.928417  820476 main.go:143] libmachine: Using SSH client type: native
	I1208 00:23:45.928751  820476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:23:45.928758  820476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:23:45.930950  820476 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 00:23:49.082530  820476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:23:49.082545  820476 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:23:49.082616  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:49.100532  820476 main.go:143] libmachine: Using SSH client type: native
	I1208 00:23:49.100852  820476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:23:49.100860  820476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:23:49.260004  820476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:23:49.260083  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:49.277832  820476 main.go:143] libmachine: Using SSH client type: native
	I1208 00:23:49.278157  820476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:23:49.278171  820476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:23:49.435050  820476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:23:49.435068  820476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:23:49.435100  820476 ubuntu.go:190] setting up certificates
	I1208 00:23:49.435108  820476 provision.go:84] configureAuth start
	I1208 00:23:49.435170  820476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:23:49.452817  820476 provision.go:143] copyHostCerts
	I1208 00:23:49.452878  820476 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:23:49.452885  820476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:23:49.452968  820476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:23:49.453052  820476 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:23:49.453056  820476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:23:49.453079  820476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:23:49.453129  820476 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:23:49.453133  820476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:23:49.453155  820476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:23:49.453197  820476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:23:49.634798  820476 provision.go:177] copyRemoteCerts
	I1208 00:23:49.634894  820476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:23:49.634940  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:49.658028  820476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:23:49.762749  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:23:49.780003  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 00:23:49.797793  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:23:49.815529  820476 provision.go:87] duration metric: took 380.397756ms to configureAuth
	I1208 00:23:49.815547  820476 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:23:49.815749  820476 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:23:49.815849  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:49.833180  820476 main.go:143] libmachine: Using SSH client type: native
	I1208 00:23:49.833499  820476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:23:49.833511  820476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:23:50.153388  820476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:23:50.153401  820476 machine.go:97] duration metric: took 4.247626836s to provisionDockerMachine
	I1208 00:23:50.153431  820476 client.go:176] duration metric: took 9.793040316s to LocalClient.Create
	I1208 00:23:50.153449  820476 start.go:167] duration metric: took 9.79308263s to libmachine.API.Create "functional-525396"
	I1208 00:23:50.153456  820476 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:23:50.153471  820476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:23:50.153541  820476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:23:50.153588  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:50.173510  820476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:23:50.282882  820476 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:23:50.286118  820476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:23:50.286135  820476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:23:50.286154  820476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:23:50.286209  820476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:23:50.286293  820476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:23:50.286378  820476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:23:50.286423  820476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:23:50.293974  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:23:50.310942  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:23:50.328393  820476 start.go:296] duration metric: took 174.922213ms for postStartSetup
	I1208 00:23:50.328768  820476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:23:50.345539  820476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:23:50.345814  820476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:23:50.345854  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:50.364827  820476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:23:50.467905  820476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:23:50.472438  820476 start.go:128] duration metric: took 10.115734918s to createHost
	I1208 00:23:50.472452  820476 start.go:83] releasing machines lock for "functional-525396", held for 10.115848897s
	I1208 00:23:50.472526  820476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:23:50.494058  820476 out.go:179] * Found network options:
	I1208 00:23:50.497107  820476 out.go:179]   - HTTP_PROXY=localhost:37175
	W1208 00:23:50.500168  820476 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1208 00:23:50.503208  820476 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1208 00:23:50.506211  820476 ssh_runner.go:195] Run: cat /version.json
	I1208 00:23:50.506260  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:50.506259  820476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:23:50.506339  820476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:23:50.524310  820476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:23:50.527910  820476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:23:50.713327  820476 ssh_runner.go:195] Run: systemctl --version
	I1208 00:23:50.719745  820476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:23:50.755787  820476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:23:50.760139  820476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:23:50.760203  820476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:23:50.788285  820476 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 00:23:50.788298  820476 start.go:496] detecting cgroup driver to use...
	I1208 00:23:50.788342  820476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:23:50.788392  820476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:23:50.809549  820476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:23:50.823808  820476 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:23:50.823872  820476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:23:50.842467  820476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:23:50.861682  820476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:23:50.981221  820476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:23:51.110307  820476 docker.go:234] disabling docker service ...
	I1208 00:23:51.110372  820476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:23:51.133410  820476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:23:51.147635  820476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:23:51.265978  820476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:23:51.385237  820476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:23:51.398801  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:23:51.413604  820476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:23:51.413660  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.422528  820476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:23:51.422612  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.431840  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.440996  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.450734  820476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:23:51.459634  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.468977  820476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.482979  820476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:23:51.492191  820476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:23:51.500011  820476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:23:51.507867  820476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:23:51.627745  820476 ssh_runner.go:195] Run: sudo systemctl restart crio
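Stripped of the ssh_runner wrappers, the CRI-O preparation in the lines above amounts to roughly this sequence on the node (the same commands as logged, with quoting lightly simplified):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver in the drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged binds to low ports and enable IP forwarding
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	# reload units and restart CRI-O with the new configuration
	sudo systemctl daemon-reload && sudo systemctl restart crio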
	I1208 00:23:51.805412  820476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:23:51.805482  820476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:23:51.809467  820476 start.go:564] Will wait 60s for crictl version
	I1208 00:23:51.809523  820476 ssh_runner.go:195] Run: which crictl
	I1208 00:23:51.813106  820476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:23:51.840741  820476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:23:51.840828  820476 ssh_runner.go:195] Run: crio --version
	I1208 00:23:51.870224  820476 ssh_runner.go:195] Run: crio --version
	I1208 00:23:51.901672  820476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:23:51.904515  820476 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:23:51.921088  820476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:23:51.924946  820476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:23:51.934547  820476 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:23:51.934652  820476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:23:51.934703  820476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:23:51.968922  820476 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:23:51.968934  820476 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:23:51.968988  820476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:23:51.993965  820476 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:23:51.993977  820476 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:23:51.993984  820476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:23:51.994070  820476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:23:51.994169  820476 ssh_runner.go:195] Run: crio config
	I1208 00:23:52.072374  820476 cni.go:84] Creating CNI manager for ""
	I1208 00:23:52.072384  820476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:23:52.072402  820476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:23:52.072427  820476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:23:52.072552  820476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:23:52.072623  820476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:23:52.081060  820476 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:23:52.081132  820476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:23:52.089243  820476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:23:52.102671  820476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:23:52.116336  820476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
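The rendered kubeadm config shown earlier is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2221 bytes per the line above); to confirm what actually landed there, a quick look through minikube's ssh subcommand would be:

	minikube -p functional-525396 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new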
	I1208 00:23:52.129641  820476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:23:52.133278  820476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:23:52.143266  820476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:23:52.264427  820476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:23:52.281378  820476 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:23:52.281389  820476 certs.go:195] generating shared ca certs ...
	I1208 00:23:52.281404  820476 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.281549  820476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:23:52.281590  820476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:23:52.281596  820476 certs.go:257] generating profile certs ...
	I1208 00:23:52.281647  820476 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:23:52.281656  820476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt with IP's: []
	I1208 00:23:52.504472  820476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt ...
	I1208 00:23:52.504490  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: {Name:mk3ad4138a6c0d09f3e2b3301eed8fa4d05df575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.504700  820476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key ...
	I1208 00:23:52.504707  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key: {Name:mkf8ab33ce7891629dfac19486ca2ca63183bc07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.504793  820476 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:23:52.504805  820476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt.7790121c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1208 00:23:52.596567  820476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt.7790121c ...
	I1208 00:23:52.596582  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt.7790121c: {Name:mk5995dedea2832d294e6e52b206cd2f9d0429a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.596763  820476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c ...
	I1208 00:23:52.596770  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c: {Name:mkf61d311a522c180942e8c036fa881346c70246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.596870  820476 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt.7790121c -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt
	I1208 00:23:52.596949  820476 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key
	I1208 00:23:52.597009  820476 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:23:52.597020  820476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt with IP's: []
	I1208 00:23:52.961075  820476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt ...
	I1208 00:23:52.961091  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt: {Name:mk6369702194f9e871829ea846ccf735d68b03a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.961281  820476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key ...
	I1208 00:23:52.961291  820476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key: {Name:mkacf80f205e13ac036da53b431aeee27c35060b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:23:52.961490  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:23:52.961532  820476 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:23:52.961542  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:23:52.961567  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:23:52.961589  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:23:52.961613  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:23:52.961655  820476 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:23:52.962235  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:23:52.981128  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:23:53.000314  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:23:53.024122  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:23:53.041865  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:23:53.059762  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:23:53.077420  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:23:53.099141  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:23:53.117695  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:23:53.136474  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:23:53.154178  820476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:23:53.171226  820476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:23:53.184527  820476 ssh_runner.go:195] Run: openssl version
	I1208 00:23:53.190514  820476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:23:53.197691  820476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:23:53.205027  820476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:23:53.208779  820476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:23:53.208836  820476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:23:53.249976  820476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:23:53.257331  820476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 00:23:53.264513  820476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:23:53.271979  820476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:23:53.279283  820476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:23:53.282986  820476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:23:53.283043  820476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:23:53.324174  820476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:23:53.331845  820476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 00:23:53.339322  820476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:23:53.347144  820476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:23:53.354771  820476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:23:53.358756  820476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:23:53.358812  820476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:23:53.400766  820476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:23:53.408352  820476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 00:23:53.416131  820476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:23:53.420179  820476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 00:23:53.420231  820476 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:23:53.420309  820476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:23:53.420370  820476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:23:53.448445  820476 cri.go:89] found id: ""
	I1208 00:23:53.448515  820476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:23:53.456548  820476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:23:53.464528  820476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:23:53.464590  820476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:23:53.472550  820476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:23:53.472559  820476 kubeadm.go:158] found existing configuration files:
	
	I1208 00:23:53.472623  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:23:53.480480  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:23:53.480557  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:23:53.489123  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:23:53.496933  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:23:53.496997  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:23:53.504662  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:23:53.512610  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:23:53.512687  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:23:53.520297  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:23:53.528057  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:23:53.528113  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:23:53.536106  820476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:23:53.575998  820476 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:23:53.576050  820476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:23:53.655105  820476 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:23:53.655168  820476 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:23:53.655203  820476 kubeadm.go:319] OS: Linux
	I1208 00:23:53.655247  820476 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:23:53.655294  820476 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:23:53.655340  820476 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:23:53.655387  820476 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:23:53.655434  820476 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:23:53.655487  820476 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:23:53.655540  820476 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:23:53.655587  820476 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:23:53.655631  820476 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:23:53.735340  820476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:23:53.735444  820476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:23:53.735537  820476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:23:53.744494  820476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:23:53.750679  820476 out.go:252]   - Generating certificates and keys ...
	I1208 00:23:53.750764  820476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:23:53.750828  820476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:23:53.884064  820476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 00:23:54.023600  820476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 00:23:54.201739  820476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 00:23:55.072967  820476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 00:23:55.451425  820476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 00:23:55.451742  820476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:23:55.528970  820476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 00:23:55.529305  820476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:23:56.005834  820476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 00:23:56.176634  820476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 00:23:56.530227  820476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 00:23:56.530462  820476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:23:56.726806  820476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:23:57.066160  820476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:23:57.754119  820476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:23:57.982469  820476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:23:58.564090  820476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:23:58.564772  820476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:23:58.567566  820476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:23:58.571315  820476 out.go:252]   - Booting up control plane ...
	I1208 00:23:58.571413  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:23:58.571489  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:23:58.573559  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:23:58.588974  820476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:23:58.589328  820476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:23:58.597732  820476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:23:58.598094  820476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:23:58.598293  820476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:23:58.727024  820476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:23:58.727136  820476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:27:58.726988  820476 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000329177s
	I1208 00:27:58.727006  820476 kubeadm.go:319] 
	I1208 00:27:58.727059  820476 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:27:58.727089  820476 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:27:58.727187  820476 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:27:58.727190  820476 kubeadm.go:319] 
	I1208 00:27:58.727287  820476 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:27:58.727316  820476 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:27:58.727344  820476 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:27:58.727347  820476 kubeadm.go:319] 
	I1208 00:27:58.732706  820476 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:27:58.733135  820476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:27:58.733245  820476 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:27:58.733476  820476 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:27:58.733480  820476 kubeadm.go:319] 
	I1208 00:27:58.733561  820476 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:27:58.733686  820476 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-525396 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000329177s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
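
	The kubelet-check failure above ends in a cgroups v1 warning, which is consistent with the kubelet never answering on 10248 on this cgroupfs (cgroup v1) node. One possible mitigation, sketched here and not attempted in this run, is to set the option the warning names ('FailCgroupV1') to false in the generated KubeletConfiguration; the camelCase YAML spelling below is an assumption:

	  apiVersion: kubelet.config.k8s.io/v1beta1
	  kind: KubeletConfiguration
	  cgroupDriver: cgroupfs
	  # per the [WARNING SystemVerification] text above; YAML key spelling assumed
	  failCgroupV1: false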
	
	I1208 00:27:58.733773  820476 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:27:59.142720  820476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:27:59.155981  820476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:27:59.156035  820476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:27:59.163925  820476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:27:59.163935  820476 kubeadm.go:158] found existing configuration files:
	
	I1208 00:27:59.163992  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:27:59.171705  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:27:59.171761  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:27:59.179205  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:27:59.187480  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:27:59.187535  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:27:59.194759  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:27:59.202284  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:27:59.202340  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:27:59.209796  820476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:27:59.217411  820476 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:27:59.217470  820476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:27:59.224798  820476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:27:59.261846  820476 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:27:59.262211  820476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:27:59.336510  820476 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:27:59.336577  820476 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:27:59.336610  820476 kubeadm.go:319] OS: Linux
	I1208 00:27:59.336658  820476 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:27:59.336702  820476 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:27:59.336752  820476 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:27:59.336804  820476 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:27:59.336849  820476 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:27:59.336904  820476 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:27:59.336952  820476 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:27:59.336996  820476 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:27:59.337045  820476 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:27:59.407102  820476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:27:59.407207  820476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:27:59.407298  820476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:27:59.415356  820476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:27:59.420747  820476 out.go:252]   - Generating certificates and keys ...
	I1208 00:27:59.420845  820476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:27:59.420914  820476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:27:59.420998  820476 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:27:59.421063  820476 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:27:59.421138  820476 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:27:59.421198  820476 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:27:59.421264  820476 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:27:59.421337  820476 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:27:59.421418  820476 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:27:59.421496  820476 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:27:59.421538  820476 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:27:59.421598  820476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:27:59.556942  820476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:27:59.969049  820476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:28:00.178940  820476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:28:00.399380  820476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:28:00.606001  820476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:28:00.606557  820476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:28:00.609619  820476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:28:00.612844  820476 out.go:252]   - Booting up control plane ...
	I1208 00:28:00.612955  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:28:00.613033  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:28:00.613100  820476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:28:00.629133  820476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:28:00.629237  820476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:28:00.640462  820476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:28:00.641243  820476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:28:00.641642  820476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:28:00.774191  820476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:28:00.774546  820476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:32:00.776572  820476 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001527831s
	I1208 00:32:00.776595  820476 kubeadm.go:319] 
	I1208 00:32:00.776692  820476 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:32:00.776748  820476 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:32:00.777068  820476 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:32:00.777076  820476 kubeadm.go:319] 
	I1208 00:32:00.777256  820476 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:32:00.777309  820476 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:32:00.777616  820476 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:32:00.777621  820476 kubeadm.go:319] 
	I1208 00:32:00.781288  820476 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:32:00.782265  820476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:32:00.782386  820476 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:32:00.782656  820476 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:32:00.782666  820476 kubeadm.go:319] 
	I1208 00:32:00.782774  820476 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:32:00.782819  820476 kubeadm.go:403] duration metric: took 8m7.36259641s to StartCluster
	I1208 00:32:00.782870  820476 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:32:00.782933  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:32:00.807929  820476 cri.go:89] found id: ""
	I1208 00:32:00.807942  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.807949  820476 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:32:00.807955  820476 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:32:00.808020  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:32:00.834348  820476 cri.go:89] found id: ""
	I1208 00:32:00.834361  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.834369  820476 logs.go:284] No container was found matching "etcd"
	I1208 00:32:00.834374  820476 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:32:00.834434  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:32:00.860722  820476 cri.go:89] found id: ""
	I1208 00:32:00.860737  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.860744  820476 logs.go:284] No container was found matching "coredns"
	I1208 00:32:00.860752  820476 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:32:00.860812  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:32:00.887647  820476 cri.go:89] found id: ""
	I1208 00:32:00.887662  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.887670  820476 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:32:00.887675  820476 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:32:00.887739  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:32:00.913084  820476 cri.go:89] found id: ""
	I1208 00:32:00.913098  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.913105  820476 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:32:00.913110  820476 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:32:00.913170  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:32:00.939508  820476 cri.go:89] found id: ""
	I1208 00:32:00.939524  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.939531  820476 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:32:00.939536  820476 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:32:00.939594  820476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:32:00.968764  820476 cri.go:89] found id: ""
	I1208 00:32:00.968777  820476 logs.go:282] 0 containers: []
	W1208 00:32:00.968784  820476 logs.go:284] No container was found matching "kindnet"
	I1208 00:32:00.968792  820476 logs.go:123] Gathering logs for kubelet ...
	I1208 00:32:00.968802  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:32:01.033377  820476 logs.go:123] Gathering logs for dmesg ...
	I1208 00:32:01.033396  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:32:01.050330  820476 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:32:01.050352  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:32:01.118321  820476 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:32:01.109760    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.110482    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.112228    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.112778    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.114305    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:32:01.109760    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.110482    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.112228    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.112778    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:01.114305    4851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
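
	The connection-refused errors above are a consequence rather than a cause: the kube-apiserver static pod was never started because the kubelet itself never became healthy. The checks the kubeadm output recommends, expressed against the node from this log (profile name taken from the log; a sketch, not output captured in this run):

	  minikube ssh -p functional-525396 -- sudo systemctl status kubelet
	  minikube ssh -p functional-525396 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	  minikube ssh -p functional-525396 -- sudo crictl ps -a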
	I1208 00:32:01.118332  820476 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:32:01.118343  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:32:01.150400  820476 logs.go:123] Gathering logs for container status ...
	I1208 00:32:01.150420  820476 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1208 00:32:01.179626  820476 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001527831s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:32:01.179672  820476 out.go:285] * 
	W1208 00:32:01.179734  820476 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001527831s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:32:01.179747  820476 out.go:285] * 
	W1208 00:32:01.181900  820476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:32:01.188977  820476 out.go:203] 
	W1208 00:32:01.192758  820476 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001527831s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:32:01.192810  820476 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:32:01.192835  820476 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:32:01.195914  820476 out.go:203] 
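The two kubeadm transcripts above fail identically: every cgroup controller is detected, but the SystemVerification warnings show the node is still on the legacy cgroup v1 hierarchy, which kubelet v1.35 only tolerates when cgroup v1 support is explicitly re-enabled. A quick sketch for confirming which hierarchy the host is actually on (standard tooling, not taken from this log):

    # Report the filesystem type mounted at /sys/fs/cgroup inside the node:
    # "cgroup2fs" means the unified cgroup v2 hierarchy, "tmpfs" means legacy cgroup v1.
    minikube ssh -p functional-525396 -- stat -fc %T /sys/fs/cgroup/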
	
	
	==> CRI-O <==
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.799807622Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.799841895Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.799883873Z" level=info msg="Create NRI interface"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.799988104Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.799996235Z" level=info msg="runtime interface created"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.800006721Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.800013934Z" level=info msg="runtime interface starting up..."
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.800020826Z" level=info msg="starting plugins..."
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.8000332Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:23:51 functional-525396 crio[846]: time="2025-12-08T00:23:51.800093902Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:23:51 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.739422025Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5e8a76f6-0e21-4743-8889-7ec2565c37b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.740274432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4c6f4347-3abb-4c31-a73b-68739d64c3d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.740825699Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b3e59d47-d629-4292-9b50-6c564b500599 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.741355189Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=44515e1d-9446-4368-890d-d2de7c86eaeb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.741869041Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=ea15b8e0-07e8-4306-98d2-79a2d3f372b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.742462622Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59023895-7f55-4fc7-8226-4d7441bf49a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:23:53 functional-525396 crio[846]: time="2025-12-08T00:23:53.743209149Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d7ee4146-b9c2-40d6-aace-eaa7da664e2c name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.409954979Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e5862632-e0ef-4f65-a386-acdba320292e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.410734194Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=970e99e8-351c-44c1-8d9c-2ec920b854a8 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.411550759Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2ccfa14d-e5c2-4974-a733-61e2c8a30035 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.412012695Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0ea32a96-9293-4ba3-9965-1734ce3c734f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.412461388Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=17ef04a8-e891-416c-ab92-e6fab60078b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.412953331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ef545881-cb07-4a1a-88f4-a6d167916903 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:27:59 functional-525396 crio[846]: time="2025-12-08T00:27:59.413433557Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=541cc5ce-8940-46f0-b49f-f2839da44d0a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:32:02.233230    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:02.234148    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:02.235826    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:02.236465    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:32:02.238074    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
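The empty container table and the connection-refused errors above are the same failure seen from two angles: the kubelet never became healthy, so the kube-apiserver static pod was never created and nothing answers on localhost:8441. A hedged way to confirm that directly on the node (crictl ships alongside CRI-O in the minikube node image, but this exact invocation is an assumption, not taken from this log):

    # List every container CRI-O knows about, including exited ones;
    # an empty result confirms no control-plane static pods were ever created.
    minikube ssh -p functional-525396 -- sudo crictl ps -a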
	
	
	==> dmesg <==
	[Dec 7 23:24] overlayfs: idmapped layers are currently not supported
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:32:02 up  5:14,  0 user,  load average: 0.09, 0.48, 0.94
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:31:59 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:32:00 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 08 00:32:00 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:00 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:00 functional-525396 kubelet[4782]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:00 functional-525396 kubelet[4782]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:00 functional-525396 kubelet[4782]: E1208 00:32:00.403324    4782 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:32:00 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:32:00 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:32:01 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 08 00:32:01 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:01 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:01 functional-525396 kubelet[4870]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:01 functional-525396 kubelet[4870]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:01 functional-525396 kubelet[4870]: E1208 00:32:01.320370    4870 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:32:01 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:32:01 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:32:02 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 08 00:32:02 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:02 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:32:02 functional-525396 kubelet[4932]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:02 functional-525396 kubelet[4932]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:32:02 functional-525396 kubelet[4932]: E1208 00:32:02.067512    4932 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:32:02 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:32:02 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
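The journal gives the root cause verbatim: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd keeps cycling it (restart counter at 647, 648, 649 above) and the health endpoint on port 10248 that kubeadm polls never comes up. The preflight warning earlier names the opt-in: set 'FailCgroupV1' to 'false'. A minimal sketch of that change against the config file this cluster uses, assuming the camelCase field spelling; note that minikube rewrites /var/lib/kubelet/config.yaml on every start (see the kubelet-start lines above), so the durable fix is a cgroup v2 host rather than this edit:

    # From a shell inside the node (e.g. 'minikube ssh -p functional-525396'):
    # opt back in to cgroup v1 as the preflight warning describes, then restart the kubelet.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet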
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 6 (331.008126ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 00:32:02.685594  826258 status.go:458] kubeconfig endpoint: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1208 00:32:02.701956  791807 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --alsologtostderr -v=8
E1208 00:32:46.335443  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:33:14.039733  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:35:34.379687  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:36:57.452637  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:37:46.335541  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-525396 --alsologtostderr -v=8: exit status 80 (6m5.576589836s)

                                                
                                                
-- stdout --
	* [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:32:02.748489  826329 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:32:02.748673  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748687  826329 out.go:374] Setting ErrFile to fd 2...
	I1208 00:32:02.748692  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748975  826329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:32:02.749379  826329 out.go:368] Setting JSON to false
	I1208 00:32:02.750240  826329 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18855,"bootTime":1765135068,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:32:02.750321  826329 start.go:143] virtualization:  
	I1208 00:32:02.755521  826329 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:32:02.759227  826329 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:32:02.759498  826329 notify.go:221] Checking for updates...
	I1208 00:32:02.765171  826329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:32:02.768668  826329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:02.771686  826329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:32:02.774728  826329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:32:02.777727  826329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:32:02.781794  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:02.781971  826329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:32:02.823053  826329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:32:02.823186  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.879429  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.869702269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.879546  826329 docker.go:319] overlay module found
	I1208 00:32:02.884410  826329 out.go:179] * Using the docker driver based on existing profile
	I1208 00:32:02.887311  826329 start.go:309] selected driver: docker
	I1208 00:32:02.887330  826329 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.887447  826329 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:32:02.887565  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.942385  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.932846048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.942810  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:02.942902  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:02.942960  826329 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.948301  826329 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:32:02.951106  826329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:32:02.954049  826329 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:32:02.956917  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:02.956968  826329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:32:02.956999  826329 cache.go:65] Caching tarball of preloaded images
	I1208 00:32:02.957004  826329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:32:02.957092  826329 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:32:02.957103  826329 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:32:02.957210  826329 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:32:02.976499  826329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:32:02.976524  826329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:32:02.976543  826329 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:32:02.976579  826329 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:32:02.976652  826329 start.go:364] duration metric: took 48.116µs to acquireMachinesLock for "functional-525396"
	I1208 00:32:02.976674  826329 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:32:02.976683  826329 fix.go:54] fixHost starting: 
	I1208 00:32:02.976940  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:02.996203  826329 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:32:02.996234  826329 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:32:02.999434  826329 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:32:02.999477  826329 machine.go:94] provisionDockerMachine start ...
	I1208 00:32:02.999559  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.021375  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.021746  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.021762  826329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:32:03.174523  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.174550  826329 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:32:03.174616  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.192743  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.193067  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.193084  826329 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:32:03.356577  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.356704  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.375055  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.375394  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.375419  826329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:32:03.529767  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:32:03.529793  826329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:32:03.529822  826329 ubuntu.go:190] setting up certificates
	I1208 00:32:03.529839  826329 provision.go:84] configureAuth start
	I1208 00:32:03.529901  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:03.552219  826329 provision.go:143] copyHostCerts
	I1208 00:32:03.552258  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552298  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:32:03.552310  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552383  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:32:03.552464  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552480  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:32:03.552484  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552511  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:32:03.552550  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552566  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:32:03.552570  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552592  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:32:03.552642  826329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:32:03.707027  826329 provision.go:177] copyRemoteCerts
	I1208 00:32:03.707105  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:32:03.707150  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.724035  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:03.830514  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:32:03.830586  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:32:03.848126  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:32:03.848238  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:32:03.865293  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:32:03.865368  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:32:03.882781  826329 provision.go:87] duration metric: took 352.917637ms to configureAuth
	I1208 00:32:03.882808  826329 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:32:03.883086  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:03.883204  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.900405  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.900722  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.900745  826329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:32:04.247102  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:32:04.247132  826329 machine.go:97] duration metric: took 1.247646186s to provisionDockerMachine
	I1208 00:32:04.247143  826329 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:32:04.247156  826329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:32:04.247233  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:32:04.247291  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.269420  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.374672  826329 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:32:04.377926  826329 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:32:04.377948  826329 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:32:04.377953  826329 command_runner.go:130] > VERSION_ID="12"
	I1208 00:32:04.377958  826329 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:32:04.377964  826329 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:32:04.377968  826329 command_runner.go:130] > ID=debian
	I1208 00:32:04.377973  826329 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:32:04.377998  826329 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:32:04.378009  826329 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:32:04.378363  826329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:32:04.378386  826329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:32:04.378397  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:32:04.378453  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:32:04.378535  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:32:04.378546  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 00:32:04.378621  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:32:04.378628  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> /etc/test/nested/copy/791807/hosts
	I1208 00:32:04.378672  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:32:04.386632  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:04.404202  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:32:04.421545  826329 start.go:296] duration metric: took 174.385446ms for postStartSetup
	I1208 00:32:04.421649  826329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:32:04.421695  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.439941  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.543929  826329 command_runner.go:130] > 13%
	I1208 00:32:04.544005  826329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:32:04.548692  826329 command_runner.go:130] > 169G
	I1208 00:32:04.548719  826329 fix.go:56] duration metric: took 1.572034198s for fixHost
	I1208 00:32:04.548730  826329 start.go:83] releasing machines lock for "functional-525396", held for 1.572067364s
	I1208 00:32:04.548856  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:04.565574  826329 ssh_runner.go:195] Run: cat /version.json
	I1208 00:32:04.565638  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.565923  826329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:32:04.565984  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.584847  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.600519  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.771794  826329 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:32:04.774495  826329 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:32:04.774657  826329 ssh_runner.go:195] Run: systemctl --version
	I1208 00:32:04.780874  826329 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:32:04.780917  826329 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:32:04.781367  826329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:32:04.818112  826329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:32:04.822491  826329 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:32:04.822532  826329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:32:04.822595  826329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:32:04.830492  826329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:32:04.830518  826329 start.go:496] detecting cgroup driver to use...
	I1208 00:32:04.830579  826329 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:32:04.830661  826329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:32:04.846467  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:32:04.859999  826329 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:32:04.860093  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:32:04.876040  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:32:04.889316  826329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:32:04.999380  826329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:32:05.135529  826329 docker.go:234] disabling docker service ...
	I1208 00:32:05.135652  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:32:05.150887  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:32:05.164082  826329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:32:05.274195  826329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:32:05.386139  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:32:05.399321  826329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:32:05.411741  826329 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 00:32:05.412925  826329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:32:05.413007  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.421375  826329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:32:05.421462  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.430145  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.438751  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.447666  826329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:32:05.455572  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.464290  826329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.472537  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.481189  826329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:32:05.487727  826329 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:32:05.488614  826329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:32:05.496261  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:05.603146  826329 ssh_runner.go:195] Run: sudo systemctl restart crio
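For reference, the cri-o preparation steps logged above condense to roughly the following shell commands on the node (file paths, pause image tag and sysctl value are taken from the log; this is a sketch of what minikube automates here, not a required manual step):

    # point crictl at the cri-o socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pin the pause image and switch cri-o to the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # make sure default_sysctls exists, then allow unprivileged low ports in pods
    sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf

    # enable ip forwarding and restart cri-o
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio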
	I1208 00:32:05.769023  826329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:32:05.769169  826329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:32:05.773391  826329 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 00:32:05.773452  826329 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:32:05.773473  826329 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1208 00:32:05.773494  826329 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:05.773524  826329 command_runner.go:130] > Access: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773553  826329 command_runner.go:130] > Modify: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773581  826329 command_runner.go:130] > Change: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773598  826329 command_runner.go:130] >  Birth: -
	I1208 00:32:05.774292  826329 start.go:564] Will wait 60s for crictl version
	I1208 00:32:05.774387  826329 ssh_runner.go:195] Run: which crictl
	I1208 00:32:05.778688  826329 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:32:05.779547  826329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:32:05.803509  826329 command_runner.go:130] > Version:  0.1.0
	I1208 00:32:05.803790  826329 command_runner.go:130] > RuntimeName:  cri-o
	I1208 00:32:05.804036  826329 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1208 00:32:05.804294  826329 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:32:05.806608  826329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:32:05.806739  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.840244  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.840321  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.840340  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.840361  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.840391  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.840415  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.840434  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.840452  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.840471  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.840498  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.840519  826329 command_runner.go:130] >      static
	I1208 00:32:05.840536  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.840553  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.840567  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.840593  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.840612  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.840629  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.840647  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.840664  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.840690  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.841800  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.872333  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.872357  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.872369  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.872376  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.872381  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.872385  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.872389  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.872395  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.872399  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.872408  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.872412  826329 command_runner.go:130] >      static
	I1208 00:32:05.872422  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.872437  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.872444  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.872448  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.872451  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.872457  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.872463  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.872467  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.872480  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.877414  826329 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:32:05.880269  826329 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:32:05.896780  826329 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:32:05.900764  826329 command_runner.go:130] > 192.168.49.1	host.minikube.internal
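The host.minikube.internal entry confirmed above maps back to the Docker network gateway (192.168.49.1 here). If that mapping ever looks wrong, one quick check from the host is (profile name taken from the log):

    minikube -p functional-525396 ssh -- grep host.minikube.internal /etc/hosts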
	I1208 00:32:05.900873  826329 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:32:05.900985  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:05.901051  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.935654  826329 command_runner.go:130] > {
	I1208 00:32:05.935679  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.935684  826329 command_runner.go:130] >     {
	I1208 00:32:05.935694  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.935699  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935705  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.935708  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935713  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935724  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.935736  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.935743  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935756  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.935763  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935768  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935772  826329 command_runner.go:130] >     },
	I1208 00:32:05.935775  826329 command_runner.go:130] >     {
	I1208 00:32:05.935781  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.935787  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935793  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.935796  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935800  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935810  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.935821  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.935825  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935829  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.935836  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935845  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935853  826329 command_runner.go:130] >     },
	I1208 00:32:05.935857  826329 command_runner.go:130] >     {
	I1208 00:32:05.935864  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.935870  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935876  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.935879  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935885  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935894  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.935905  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.935908  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935912  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.935917  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.935923  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935927  826329 command_runner.go:130] >     },
	I1208 00:32:05.935932  826329 command_runner.go:130] >     {
	I1208 00:32:05.935938  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.935946  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935956  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.935962  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935967  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935975  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.935986  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.935990  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935994  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.936001  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936006  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936011  826329 command_runner.go:130] >       },
	I1208 00:32:05.936021  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936028  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936031  826329 command_runner.go:130] >     },
	I1208 00:32:05.936034  826329 command_runner.go:130] >     {
	I1208 00:32:05.936041  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.936048  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936053  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.936057  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936063  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936072  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.936083  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.936087  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936091  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.936095  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936101  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936105  826329 command_runner.go:130] >       },
	I1208 00:32:05.936110  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936116  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936119  826329 command_runner.go:130] >     },
	I1208 00:32:05.936122  826329 command_runner.go:130] >     {
	I1208 00:32:05.936129  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.936136  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936143  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.936152  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936160  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936169  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.936179  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.936184  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936189  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.936195  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936199  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936203  826329 command_runner.go:130] >       },
	I1208 00:32:05.936207  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936215  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936219  826329 command_runner.go:130] >     },
	I1208 00:32:05.936222  826329 command_runner.go:130] >     {
	I1208 00:32:05.936228  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.936235  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936240  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.936244  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936255  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936263  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.936271  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.936277  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936282  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.936288  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936292  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936295  826329 command_runner.go:130] >     },
	I1208 00:32:05.936298  826329 command_runner.go:130] >     {
	I1208 00:32:05.936306  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.936313  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936318  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.936322  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936326  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936336  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.936362  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.936372  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936377  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.936387  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936391  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936395  826329 command_runner.go:130] >       },
	I1208 00:32:05.936406  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936410  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936414  826329 command_runner.go:130] >     },
	I1208 00:32:05.936417  826329 command_runner.go:130] >     {
	I1208 00:32:05.936424  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.936432  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936437  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.936441  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936445  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936455  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.936465  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.936469  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936473  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.936483  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936487  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.936490  826329 command_runner.go:130] >       },
	I1208 00:32:05.936500  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936504  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.936507  826329 command_runner.go:130] >     }
	I1208 00:32:05.936510  826329 command_runner.go:130] >   ]
	I1208 00:32:05.936513  826329 command_runner.go:130] > }
	I1208 00:32:05.936690  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.936705  826329 crio.go:433] Images already preloaded, skipping extraction
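The preload check above walks the JSON emitted by crictl images --output json (an images array with id, repoTags, repoDigests, size and pinned per entry) and compares it against the expected v1.35.0-beta.0 image set. A compact way to eyeball the same data on the node, assuming jq is available there, is:

    sudo crictl images --output json | \
      jq -r '.images[] | [.repoTags[0], .size, (.pinned|tostring)] | @tsv'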
	I1208 00:32:05.936757  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.965491  826329 command_runner.go:130] > {
	I1208 00:32:05.965510  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.965515  826329 command_runner.go:130] >     {
	I1208 00:32:05.965525  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.965542  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965549  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.965553  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965557  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965584  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.965593  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.965596  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965600  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.965604  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965614  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965618  826329 command_runner.go:130] >     },
	I1208 00:32:05.965620  826329 command_runner.go:130] >     {
	I1208 00:32:05.965627  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.965630  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965635  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.965639  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965642  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965650  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.965659  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.965662  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965666  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.965669  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965675  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965679  826329 command_runner.go:130] >     },
	I1208 00:32:05.965682  826329 command_runner.go:130] >     {
	I1208 00:32:05.965689  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.965692  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965700  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.965704  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965708  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965715  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.965723  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.965726  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965733  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.965738  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.965741  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965744  826329 command_runner.go:130] >     },
	I1208 00:32:05.965747  826329 command_runner.go:130] >     {
	I1208 00:32:05.965754  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.965758  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965763  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.965768  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965772  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965779  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.965786  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.965789  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965793  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.965796  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965800  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965803  826329 command_runner.go:130] >       },
	I1208 00:32:05.965811  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965815  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965818  826329 command_runner.go:130] >     },
	I1208 00:32:05.965821  826329 command_runner.go:130] >     {
	I1208 00:32:05.965827  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.965831  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965841  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.965844  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965848  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965859  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.965867  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.965870  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965874  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.965877  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965881  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965884  826329 command_runner.go:130] >       },
	I1208 00:32:05.965891  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965895  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965898  826329 command_runner.go:130] >     },
	I1208 00:32:05.965901  826329 command_runner.go:130] >     {
	I1208 00:32:05.965907  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.965911  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965917  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.965920  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965924  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965932  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.965944  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.965947  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965951  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.965954  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965958  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965961  826329 command_runner.go:130] >       },
	I1208 00:32:05.965964  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965968  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965971  826329 command_runner.go:130] >     },
	I1208 00:32:05.965974  826329 command_runner.go:130] >     {
	I1208 00:32:05.965980  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.965984  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965989  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.965992  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965995  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966003  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.966013  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.966016  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966020  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.966023  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966027  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966030  826329 command_runner.go:130] >     },
	I1208 00:32:05.966033  826329 command_runner.go:130] >     {
	I1208 00:32:05.966042  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.966046  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966051  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.966054  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966058  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966066  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.966082  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.966086  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966090  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.966094  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966097  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.966100  826329 command_runner.go:130] >       },
	I1208 00:32:05.966104  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966109  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966112  826329 command_runner.go:130] >     },
	I1208 00:32:05.966117  826329 command_runner.go:130] >     {
	I1208 00:32:05.966124  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.966127  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966131  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.966136  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966140  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966149  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.966156  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.966160  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966163  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.966167  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966171  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.966173  826329 command_runner.go:130] >       },
	I1208 00:32:05.966177  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966180  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.966183  826329 command_runner.go:130] >     }
	I1208 00:32:05.966186  826329 command_runner.go:130] >   ]
	I1208 00:32:05.966189  826329 command_runner.go:130] > }
	I1208 00:32:05.968541  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.968564  826329 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:32:05.968572  826329 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:32:05.968676  826329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
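The kubelet drop-in above is what minikube generates for this node; to confirm what actually landed on the machine, the unit and the config file referenced by --config can be dumped with (profile name and paths taken from the log):

    minikube -p functional-525396 ssh -- sudo systemctl cat kubelet
    minikube -p functional-525396 ssh -- sudo cat /var/lib/kubelet/config.yaml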
	I1208 00:32:05.968759  826329 ssh_runner.go:195] Run: crio config
	I1208 00:32:06.017314  826329 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 00:32:06.017338  826329 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 00:32:06.017347  826329 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 00:32:06.017350  826329 command_runner.go:130] > #
	I1208 00:32:06.017357  826329 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 00:32:06.017363  826329 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 00:32:06.017370  826329 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 00:32:06.017378  826329 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 00:32:06.017384  826329 command_runner.go:130] > # reload'.
	I1208 00:32:06.017391  826329 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 00:32:06.017404  826329 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 00:32:06.017411  826329 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 00:32:06.017417  826329 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 00:32:06.017423  826329 command_runner.go:130] > [crio]
	I1208 00:32:06.017429  826329 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 00:32:06.017434  826329 command_runner.go:130] > # containers images, in this directory.
	I1208 00:32:06.017704  826329 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 00:32:06.017722  826329 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 00:32:06.017729  826329 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1208 00:32:06.017738  826329 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1208 00:32:06.017898  826329 command_runner.go:130] > # imagestore = ""
	I1208 00:32:06.017914  826329 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 00:32:06.017922  826329 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 00:32:06.018164  826329 command_runner.go:130] > # storage_driver = "overlay"
	I1208 00:32:06.018180  826329 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 00:32:06.018187  826329 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 00:32:06.018278  826329 command_runner.go:130] > # storage_option = [
	I1208 00:32:06.018455  826329 command_runner.go:130] > # ]
	I1208 00:32:06.018487  826329 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 00:32:06.018500  826329 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 00:32:06.018675  826329 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 00:32:06.018694  826329 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 00:32:06.018706  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 00:32:06.018719  826329 command_runner.go:130] > # always happen on a node reboot
	I1208 00:32:06.018990  826329 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 00:32:06.019024  826329 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 00:32:06.019035  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 00:32:06.019041  826329 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 00:32:06.019224  826329 command_runner.go:130] > # version_file_persist = ""
	I1208 00:32:06.019243  826329 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 00:32:06.019258  826329 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 00:32:06.019484  826329 command_runner.go:130] > # internal_wipe = true
	I1208 00:32:06.019500  826329 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1208 00:32:06.019507  826329 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1208 00:32:06.019754  826329 command_runner.go:130] > # internal_repair = true
	I1208 00:32:06.019769  826329 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 00:32:06.019785  826329 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 00:32:06.019793  826329 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 00:32:06.020120  826329 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 00:32:06.020138  826329 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 00:32:06.020143  826329 command_runner.go:130] > [crio.api]
	I1208 00:32:06.020148  826329 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 00:32:06.020346  826329 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 00:32:06.020366  826329 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 00:32:06.020581  826329 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 00:32:06.020605  826329 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 00:32:06.020611  826329 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 00:32:06.020863  826329 command_runner.go:130] > # stream_port = "0"
	I1208 00:32:06.020878  826329 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 00:32:06.021158  826329 command_runner.go:130] > # stream_enable_tls = false
	I1208 00:32:06.021176  826329 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 00:32:06.021352  826329 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 00:32:06.021367  826329 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 00:32:06.021380  826329 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021617  826329 command_runner.go:130] > # stream_tls_cert = ""
	I1208 00:32:06.021634  826329 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 00:32:06.021641  826329 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021794  826329 command_runner.go:130] > # stream_tls_key = ""
	I1208 00:32:06.021808  826329 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 00:32:06.021824  826329 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 00:32:06.021840  826329 command_runner.go:130] > # automatically pick up the changes.
	I1208 00:32:06.022038  826329 command_runner.go:130] > # stream_tls_ca = ""
	I1208 00:32:06.022075  826329 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022282  826329 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 00:32:06.022297  826329 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022560  826329 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 00:32:06.022581  826329 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 00:32:06.022589  826329 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 00:32:06.022596  826329 command_runner.go:130] > [crio.runtime]
	I1208 00:32:06.022603  826329 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 00:32:06.022613  826329 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 00:32:06.022618  826329 command_runner.go:130] > # "nofile=1024:2048"
	I1208 00:32:06.022627  826329 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 00:32:06.022736  826329 command_runner.go:130] > # default_ulimits = [
	I1208 00:32:06.022966  826329 command_runner.go:130] > # ]
	I1208 00:32:06.022982  826329 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 00:32:06.023192  826329 command_runner.go:130] > # no_pivot = false
	I1208 00:32:06.023203  826329 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 00:32:06.023210  826329 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 00:32:06.023435  826329 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 00:32:06.023449  826329 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 00:32:06.023455  826329 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 00:32:06.023463  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023655  826329 command_runner.go:130] > # conmon = ""
	I1208 00:32:06.023668  826329 command_runner.go:130] > # Cgroup setting for conmon
	I1208 00:32:06.023697  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 00:32:06.023812  826329 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 00:32:06.023826  826329 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 00:32:06.023831  826329 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 00:32:06.023839  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023982  826329 command_runner.go:130] > # conmon_env = [
	I1208 00:32:06.024123  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024147  826329 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 00:32:06.024153  826329 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 00:32:06.024161  826329 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 00:32:06.024313  826329 command_runner.go:130] > # default_env = [
	I1208 00:32:06.024407  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024424  826329 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 00:32:06.024439  826329 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1208 00:32:06.024689  826329 command_runner.go:130] > # selinux = false
	I1208 00:32:06.024713  826329 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 00:32:06.024722  826329 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1208 00:32:06.024727  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.024963  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.024977  826329 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1208 00:32:06.024983  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025171  826329 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1208 00:32:06.025185  826329 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 00:32:06.025199  826329 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 00:32:06.025214  826329 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 00:32:06.025222  826329 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 00:32:06.025227  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025459  826329 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 00:32:06.025474  826329 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 00:32:06.025479  826329 command_runner.go:130] > # the cgroup blockio controller.
	I1208 00:32:06.025701  826329 command_runner.go:130] > # blockio_config_file = ""
	I1208 00:32:06.025716  826329 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1208 00:32:06.025721  826329 command_runner.go:130] > # blockio parameters.
	I1208 00:32:06.025998  826329 command_runner.go:130] > # blockio_reload = false
	I1208 00:32:06.026018  826329 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 00:32:06.026025  826329 command_runner.go:130] > # irqbalance daemon.
	I1208 00:32:06.026221  826329 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 00:32:06.026241  826329 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1208 00:32:06.026249  826329 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1208 00:32:06.026257  826329 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1208 00:32:06.026494  826329 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1208 00:32:06.026510  826329 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 00:32:06.026517  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.026722  826329 command_runner.go:130] > # rdt_config_file = ""
	I1208 00:32:06.026753  826329 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 00:32:06.026902  826329 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 00:32:06.026919  826329 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 00:32:06.027125  826329 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 00:32:06.027138  826329 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 00:32:06.027163  826329 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 00:32:06.027177  826329 command_runner.go:130] > # will be added.
	I1208 00:32:06.027277  826329 command_runner.go:130] > # default_capabilities = [
	I1208 00:32:06.027581  826329 command_runner.go:130] > # 	"CHOWN",
	I1208 00:32:06.027682  826329 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 00:32:06.027912  826329 command_runner.go:130] > # 	"FSETID",
	I1208 00:32:06.028073  826329 command_runner.go:130] > # 	"FOWNER",
	I1208 00:32:06.028166  826329 command_runner.go:130] > # 	"SETGID",
	I1208 00:32:06.028351  826329 command_runner.go:130] > # 	"SETUID",
	I1208 00:32:06.028526  826329 command_runner.go:130] > # 	"SETPCAP",
	I1208 00:32:06.028680  826329 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 00:32:06.028802  826329 command_runner.go:130] > # 	"KILL",
	I1208 00:32:06.028996  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029019  826329 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 00:32:06.029028  826329 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 00:32:06.029301  826329 command_runner.go:130] > # add_inheritable_capabilities = false
	I1208 00:32:06.029326  826329 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 00:32:06.029333  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029338  826329 command_runner.go:130] > default_sysctls = [
	I1208 00:32:06.029464  826329 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1208 00:32:06.029477  826329 command_runner.go:130] > ]
	I1208 00:32:06.029483  826329 command_runner.go:130] > # List of devices on the host that a
	I1208 00:32:06.029491  826329 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 00:32:06.029495  826329 command_runner.go:130] > # allowed_devices = [
	I1208 00:32:06.029499  826329 command_runner.go:130] > # 	"/dev/fuse",
	I1208 00:32:06.029507  826329 command_runner.go:130] > # 	"/dev/net/tun",
	I1208 00:32:06.029726  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029756  826329 command_runner.go:130] > # List of additional devices. specified as
	I1208 00:32:06.029769  826329 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 00:32:06.029775  826329 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 00:32:06.029782  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029898  826329 command_runner.go:130] > # additional_devices = [
	I1208 00:32:06.029911  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029918  826329 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 00:32:06.029922  826329 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 00:32:06.030014  826329 command_runner.go:130] > # 	"/etc/cdi",
	I1208 00:32:06.030033  826329 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 00:32:06.030037  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030045  826329 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 00:32:06.030051  826329 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 00:32:06.030058  826329 command_runner.go:130] > # Defaults to false.
	I1208 00:32:06.030179  826329 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 00:32:06.030194  826329 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 00:32:06.030201  826329 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 00:32:06.030206  826329 command_runner.go:130] > # hooks_dir = [
	I1208 00:32:06.030462  826329 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 00:32:06.030539  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030554  826329 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 00:32:06.030561  826329 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 00:32:06.030592  826329 command_runner.go:130] > # its default mounts from the following two files:
	I1208 00:32:06.030598  826329 command_runner.go:130] > #
	I1208 00:32:06.030608  826329 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 00:32:06.030631  826329 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 00:32:06.030642  826329 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 00:32:06.030646  826329 command_runner.go:130] > #
	I1208 00:32:06.030658  826329 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 00:32:06.030668  826329 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 00:32:06.030675  826329 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 00:32:06.030680  826329 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 00:32:06.030684  826329 command_runner.go:130] > #
	I1208 00:32:06.030688  826329 command_runner.go:130] > # default_mounts_file = ""
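A minimal sketch of the override mechanism just described, assuming the override file path named in item 1: default_mounts_file points CRI-O at the file to read, and that file holds one "/SRC:/DST" mount per line.

    # crio.conf drop-in: read default mounts only from the override file below,
    # which would contain one "/SRC:/DST" mount per line.
    default_mounts_file = "/etc/containers/mounts.conf"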
	I1208 00:32:06.030697  826329 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 00:32:06.030710  826329 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 00:32:06.030795  826329 command_runner.go:130] > # pids_limit = -1
	I1208 00:32:06.030811  826329 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1208 00:32:06.030858  826329 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 00:32:06.030867  826329 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 00:32:06.030881  826329 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 00:32:06.030886  826329 command_runner.go:130] > # log_size_max = -1
	I1208 00:32:06.030903  826329 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 00:32:06.031086  826329 command_runner.go:130] > # log_to_journald = false
	I1208 00:32:06.031102  826329 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 00:32:06.031167  826329 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 00:32:06.031181  826329 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 00:32:06.031241  826329 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 00:32:06.031258  826329 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 00:32:06.031327  826329 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 00:32:06.031335  826329 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 00:32:06.031339  826329 command_runner.go:130] > # read_only = false
	I1208 00:32:06.031345  826329 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 00:32:06.031377  826329 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 00:32:06.031383  826329 command_runner.go:130] > # live configuration reload.
	I1208 00:32:06.031388  826329 command_runner.go:130] > # log_level = "info"
	I1208 00:32:06.031397  826329 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 00:32:06.031408  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.031412  826329 command_runner.go:130] > # log_filter = ""
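As a hedged illustration of the two logging knobs just described (the values are arbitrary examples, not what this run used):

    log_level = "debug"
    # keep only messages matching this regular expression (example pattern)
    log_filter = "image"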
	I1208 00:32:06.031419  826329 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031430  826329 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 00:32:06.031434  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031452  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031456  826329 command_runner.go:130] > # uid_mappings = ""
	I1208 00:32:06.031462  826329 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031468  826329 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 00:32:06.031472  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031482  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031553  826329 command_runner.go:130] > # gid_mappings = ""
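A sketch of the containerUID:HostUID:Size / containerGID:HostGID:Size form described above; the ranges are illustrative only, and both options are deprecated as noted.

    uid_mappings = "0:100000:65536"
    gid_mappings = "0:100000:65536"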
	I1208 00:32:06.031569  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 00:32:06.031632  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031648  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031656  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031742  826329 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 00:32:06.031759  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 00:32:06.031785  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031798  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031807  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.032017  826329 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 00:32:06.032056  826329 command_runner.go:130] > # The minimum amount of time in seconds to wait before issuing a timeout
	I1208 00:32:06.032071  826329 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 00:32:06.032077  826329 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1208 00:32:06.032099  826329 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 00:32:06.032106  826329 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 00:32:06.032112  826329 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 00:32:06.032205  826329 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 00:32:06.032267  826329 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 00:32:06.032278  826329 command_runner.go:130] > # drop_infra_ctr = true
	I1208 00:32:06.032285  826329 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 00:32:06.032292  826329 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 00:32:06.032307  826329 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 00:32:06.032340  826329 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 00:32:06.032356  826329 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1208 00:32:06.032371  826329 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1208 00:32:06.032378  826329 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1208 00:32:06.032384  826329 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1208 00:32:06.032394  826329 command_runner.go:130] > # shared_cpuset = ""
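For reference, a sketch using the Linux CPU list format mentioned above; the CPU numbers are arbitrary examples.

    infra_ctr_cpuset = "0-1"
    shared_cpuset = "2,3,6-7"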
	I1208 00:32:06.032400  826329 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 00:32:06.032411  826329 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 00:32:06.032448  826329 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 00:32:06.032463  826329 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 00:32:06.032467  826329 command_runner.go:130] > # pinns_path = ""
	I1208 00:32:06.032473  826329 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1208 00:32:06.032479  826329 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1208 00:32:06.032487  826329 command_runner.go:130] > # enable_criu_support = true
	I1208 00:32:06.032493  826329 command_runner.go:130] > # Enable/disable the generation of the container and
	I1208 00:32:06.032500  826329 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1208 00:32:06.032732  826329 command_runner.go:130] > # enable_pod_events = false
	I1208 00:32:06.032748  826329 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 00:32:06.032827  826329 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1208 00:32:06.032846  826329 command_runner.go:130] > # default_runtime = "crun"
	I1208 00:32:06.032871  826329 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 00:32:06.032889  826329 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1208 00:32:06.032901  826329 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 00:32:06.032911  826329 command_runner.go:130] > # creation as a file is not desired either.
	I1208 00:32:06.032919  826329 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 00:32:06.032929  826329 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 00:32:06.032938  826329 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 00:32:06.032974  826329 command_runner.go:130] > # ]
	I1208 00:32:06.033041  826329 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 00:32:06.033057  826329 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 00:32:06.033064  826329 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1208 00:32:06.033070  826329 command_runner.go:130] > # Each entry in the table should follow the format:
	I1208 00:32:06.033073  826329 command_runner.go:130] > #
	I1208 00:32:06.033106  826329 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1208 00:32:06.033112  826329 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1208 00:32:06.033117  826329 command_runner.go:130] > # runtime_type = "oci"
	I1208 00:32:06.033192  826329 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1208 00:32:06.033209  826329 command_runner.go:130] > # inherit_default_runtime = false
	I1208 00:32:06.033214  826329 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1208 00:32:06.033219  826329 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1208 00:32:06.033225  826329 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1208 00:32:06.033228  826329 command_runner.go:130] > # monitor_env = []
	I1208 00:32:06.033233  826329 command_runner.go:130] > # privileged_without_host_devices = false
	I1208 00:32:06.033237  826329 command_runner.go:130] > # allowed_annotations = []
	I1208 00:32:06.033263  826329 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1208 00:32:06.033276  826329 command_runner.go:130] > # no_sync_log = false
	I1208 00:32:06.033282  826329 command_runner.go:130] > # default_annotations = {}
	I1208 00:32:06.033376  826329 command_runner.go:130] > # stream_websockets = false
	I1208 00:32:06.033384  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.033433  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.033444  826329 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1208 00:32:06.033456  826329 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1208 00:32:06.033467  826329 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 00:32:06.033474  826329 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 00:32:06.033477  826329 command_runner.go:130] > #   in $PATH.
	I1208 00:32:06.033483  826329 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1208 00:32:06.033489  826329 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 00:32:06.033495  826329 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1208 00:32:06.033504  826329 command_runner.go:130] > #   state.
	I1208 00:32:06.033518  826329 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 00:32:06.033528  826329 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 00:32:06.033535  826329 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1208 00:32:06.033547  826329 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1208 00:32:06.033552  826329 command_runner.go:130] > #   the values from the default runtime on load time.
	I1208 00:32:06.033558  826329 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 00:32:06.033563  826329 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 00:32:06.033604  826329 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 00:32:06.033610  826329 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 00:32:06.033615  826329 command_runner.go:130] > #   The currently recognized values are:
	I1208 00:32:06.033697  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 00:32:06.033736  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 00:32:06.033745  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 00:32:06.033760  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 00:32:06.033770  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 00:32:06.033787  826329 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 00:32:06.033799  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1208 00:32:06.033811  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1208 00:32:06.033818  826329 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 00:32:06.033824  826329 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1208 00:32:06.033832  826329 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1208 00:32:06.033842  826329 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1208 00:32:06.033851  826329 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1208 00:32:06.033863  826329 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1208 00:32:06.033869  826329 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1208 00:32:06.033883  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1208 00:32:06.033892  826329 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1208 00:32:06.033896  826329 command_runner.go:130] > #   deprecated option "conmon".
	I1208 00:32:06.033903  826329 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1208 00:32:06.033908  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1208 00:32:06.033916  826329 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1208 00:32:06.033925  826329 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 00:32:06.033933  826329 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1208 00:32:06.033944  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1208 00:32:06.033955  826329 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1208 00:32:06.033959  826329 command_runner.go:130] > #   conmon-rs by using:
	I1208 00:32:06.033976  826329 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1208 00:32:06.033990  826329 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1208 00:32:06.033998  826329 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1208 00:32:06.034005  826329 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1208 00:32:06.034012  826329 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1208 00:32:06.034036  826329 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1208 00:32:06.034044  826329 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1208 00:32:06.034064  826329 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1208 00:32:06.034074  826329 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1208 00:32:06.034087  826329 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1208 00:32:06.034557  826329 command_runner.go:130] > #   when a machine crash happens.
	I1208 00:32:06.034567  826329 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1208 00:32:06.034582  826329 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1208 00:32:06.034589  826329 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1208 00:32:06.034594  826329 command_runner.go:130] > #   seccomp profile for the runtime.
	I1208 00:32:06.034680  826329 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1208 00:32:06.034713  826329 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1208 00:32:06.034720  826329 command_runner.go:130] > #
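A hedged sketch of the monitor_env values described above for conmon-rs; the runtime-handler name is the placeholder used in this documentation and the heaptrack output directory is hypothetical.

    [crio.runtime.runtimes.runtime-handler]
    monitor_env = [
        "LOG_DRIVER=systemd",
        "HEAPTRACK_OUTPUT_PATH=/tmp/heaptrack",
    ]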
	I1208 00:32:06.034732  826329 command_runner.go:130] > # Using the seccomp notifier feature:
	I1208 00:32:06.034735  826329 command_runner.go:130] > #
	I1208 00:32:06.034742  826329 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1208 00:32:06.034749  826329 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1208 00:32:06.034762  826329 command_runner.go:130] > #
	I1208 00:32:06.034769  826329 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1208 00:32:06.034785  826329 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1208 00:32:06.034788  826329 command_runner.go:130] > #
	I1208 00:32:06.034795  826329 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1208 00:32:06.034799  826329 command_runner.go:130] > # feature.
	I1208 00:32:06.034802  826329 command_runner.go:130] > #
	I1208 00:32:06.034808  826329 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1208 00:32:06.034819  826329 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1208 00:32:06.034825  826329 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1208 00:32:06.034837  826329 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1208 00:32:06.034858  826329 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1208 00:32:06.034861  826329 command_runner.go:130] > #
	I1208 00:32:06.034867  826329 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1208 00:32:06.034878  826329 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1208 00:32:06.034881  826329 command_runner.go:130] > #
	I1208 00:32:06.034887  826329 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1208 00:32:06.034897  826329 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1208 00:32:06.034900  826329 command_runner.go:130] > #
	I1208 00:32:06.034906  826329 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1208 00:32:06.034916  826329 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1208 00:32:06.034920  826329 command_runner.go:130] > # limitation.
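Pulling the notifier prerequisites together, a sketch only (the runtime-handler name is the documentation's placeholder, not a handler from this run): allow the annotation on a handler as below, then set "io.kubernetes.cri-o.seccompNotifierAction=stop" on the Pod sandbox and set the Pod's restartPolicy to "Never", as described above.

    [crio.runtime.runtimes.runtime-handler]
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]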
	I1208 00:32:06.034927  826329 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1208 00:32:06.034932  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1208 00:32:06.034939  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.034944  826329 command_runner.go:130] > runtime_root = "/run/crun"
	I1208 00:32:06.034954  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.034958  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.034962  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.034972  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.034976  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.034981  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.034990  826329 command_runner.go:130] > allowed_annotations = [
	I1208 00:32:06.034999  826329 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1208 00:32:06.035002  826329 command_runner.go:130] > ]
	I1208 00:32:06.035007  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035011  826329 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 00:32:06.035016  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1208 00:32:06.035020  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.035024  826329 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 00:32:06.035034  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.035038  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.035042  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.035046  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.035050  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.035054  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.035145  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035184  826329 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 00:32:06.035191  826329 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 00:32:06.035197  826329 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 00:32:06.035205  826329 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1208 00:32:06.035222  826329 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1208 00:32:06.035233  826329 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1208 00:32:06.035249  826329 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1208 00:32:06.035255  826329 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 00:32:06.035265  826329 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 00:32:06.035274  826329 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 00:32:06.035280  826329 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 00:32:06.035291  826329 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 00:32:06.035294  826329 command_runner.go:130] > # Example:
	I1208 00:32:06.035299  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 00:32:06.035309  826329 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 00:32:06.035318  826329 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 00:32:06.035324  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 00:32:06.035413  826329 command_runner.go:130] > # cpuset = "0-1"
	I1208 00:32:06.035447  826329 command_runner.go:130] > # cpushares = "5"
	I1208 00:32:06.035460  826329 command_runner.go:130] > # cpuquota = "1000"
	I1208 00:32:06.035471  826329 command_runner.go:130] > # cpuperiod = "100000"
	I1208 00:32:06.035475  826329 command_runner.go:130] > # cpulimit = "35"
	I1208 00:32:06.035479  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.035483  826329 command_runner.go:130] > # The workload name is workload-type.
	I1208 00:32:06.035497  826329 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 00:32:06.035502  826329 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 00:32:06.035540  826329 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 00:32:06.035556  826329 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 00:32:06.035563  826329 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 00:32:06.035576  826329 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1208 00:32:06.035584  826329 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1208 00:32:06.035592  826329 command_runner.go:130] > # Default value is set to true
	I1208 00:32:06.035597  826329 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1208 00:32:06.035603  826329 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1208 00:32:06.035607  826329 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1208 00:32:06.035703  826329 command_runner.go:130] > # Default value is set to 'false'
	I1208 00:32:06.035729  826329 command_runner.go:130] > # disable_hostport_mapping = false
	I1208 00:32:06.035736  826329 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1208 00:32:06.035751  826329 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1208 00:32:06.035755  826329 command_runner.go:130] > # timezone = ""
	I1208 00:32:06.035762  826329 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 00:32:06.035769  826329 command_runner.go:130] > #
	I1208 00:32:06.035775  826329 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 00:32:06.035782  826329 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1208 00:32:06.035785  826329 command_runner.go:130] > [crio.image]
	I1208 00:32:06.035791  826329 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 00:32:06.035796  826329 command_runner.go:130] > # default_transport = "docker://"
	I1208 00:32:06.035802  826329 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 00:32:06.035813  826329 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035818  826329 command_runner.go:130] > # global_auth_file = ""
	I1208 00:32:06.035823  826329 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 00:32:06.035833  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035852  826329 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1208 00:32:06.035863  826329 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 00:32:06.035874  826329 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035950  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035964  826329 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 00:32:06.035972  826329 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 00:32:06.035989  826329 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1208 00:32:06.035998  826329 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1208 00:32:06.036009  826329 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 00:32:06.036013  826329 command_runner.go:130] > # pause_command = "/pause"
	I1208 00:32:06.036019  826329 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1208 00:32:06.036030  826329 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1208 00:32:06.036036  826329 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1208 00:32:06.036043  826329 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1208 00:32:06.036052  826329 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1208 00:32:06.036058  826329 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1208 00:32:06.036062  826329 command_runner.go:130] > # pinned_images = [
	I1208 00:32:06.036065  826329 command_runner.go:130] > # ]
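An illustrative sketch of the exact, glob, and keyword patterns described above; the image names are examples, with the first one matching the pause_image default shown earlier.

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",  # exact match
        "registry.k8s.io/etcd*",         # glob: wildcard at the end
        "*kindnet*",                     # keyword: wildcards on both ends
    ]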
	I1208 00:32:06.036071  826329 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 00:32:06.036077  826329 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 00:32:06.036087  826329 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 00:32:06.036093  826329 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 00:32:06.036104  826329 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 00:32:06.036109  826329 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1208 00:32:06.036115  826329 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1208 00:32:06.036126  826329 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1208 00:32:06.036133  826329 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1208 00:32:06.036139  826329 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1208 00:32:06.036145  826329 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1208 00:32:06.036150  826329 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1208 00:32:06.036160  826329 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 00:32:06.036167  826329 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 00:32:06.036172  826329 command_runner.go:130] > # changing them here.
	I1208 00:32:06.036184  826329 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1208 00:32:06.036193  826329 command_runner.go:130] > # insecure_registries = [
	I1208 00:32:06.036196  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036300  826329 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 00:32:06.036317  826329 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1208 00:32:06.036326  826329 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 00:32:06.036331  826329 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 00:32:06.036335  826329 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 00:32:06.036342  826329 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1208 00:32:06.036353  826329 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1208 00:32:06.036358  826329 command_runner.go:130] > # auto_reload_registries = false
	I1208 00:32:06.036365  826329 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1208 00:32:06.036377  826329 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1208 00:32:06.036388  826329 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1208 00:32:06.036393  826329 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1208 00:32:06.036398  826329 command_runner.go:130] > # The mode of short name resolution.
	I1208 00:32:06.036404  826329 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1208 00:32:06.036418  826329 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1208 00:32:06.036424  826329 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1208 00:32:06.036433  826329 command_runner.go:130] > # short_name_mode = "enforcing"
	I1208 00:32:06.036439  826329 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1208 00:32:06.036446  826329 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1208 00:32:06.036457  826329 command_runner.go:130] > # oci_artifact_mount_support = true
	I1208 00:32:06.036463  826329 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 00:32:06.036466  826329 command_runner.go:130] > # CNI plugins.
	I1208 00:32:06.036469  826329 command_runner.go:130] > [crio.network]
	I1208 00:32:06.036476  826329 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 00:32:06.036481  826329 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1208 00:32:06.036485  826329 command_runner.go:130] > # cni_default_network = ""
	I1208 00:32:06.036496  826329 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 00:32:06.036501  826329 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 00:32:06.036506  826329 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 00:32:06.036515  826329 command_runner.go:130] > # plugin_dirs = [
	I1208 00:32:06.036642  826329 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 00:32:06.036668  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036675  826329 command_runner.go:130] > # List of included pod metrics.
	I1208 00:32:06.036679  826329 command_runner.go:130] > # included_pod_metrics = [
	I1208 00:32:06.036860  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036921  826329 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 00:32:06.036927  826329 command_runner.go:130] > [crio.metrics]
	I1208 00:32:06.036932  826329 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 00:32:06.036937  826329 command_runner.go:130] > # enable_metrics = false
	I1208 00:32:06.036942  826329 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 00:32:06.036953  826329 command_runner.go:130] > # Per default all metrics are enabled.
	I1208 00:32:06.036960  826329 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1208 00:32:06.036994  826329 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 00:32:06.037043  826329 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 00:32:06.037079  826329 command_runner.go:130] > # metrics_collectors = [
	I1208 00:32:06.037090  826329 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 00:32:06.037155  826329 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1208 00:32:06.037178  826329 command_runner.go:130] > # 	"containers_oom_total",
	I1208 00:32:06.037336  826329 command_runner.go:130] > # 	"processes_defunct",
	I1208 00:32:06.037413  826329 command_runner.go:130] > # 	"operations_total",
	I1208 00:32:06.037662  826329 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 00:32:06.037734  826329 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 00:32:06.037748  826329 command_runner.go:130] > # 	"operations_errors_total",
	I1208 00:32:06.037753  826329 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 00:32:06.037772  826329 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 00:32:06.037792  826329 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 00:32:06.037922  826329 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 00:32:06.037987  826329 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 00:32:06.038011  826329 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 00:32:06.038021  826329 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1208 00:32:06.038045  826329 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1208 00:32:06.038193  826329 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1208 00:32:06.038255  826329 command_runner.go:130] > # ]
	I1208 00:32:06.038268  826329 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1208 00:32:06.038283  826329 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1208 00:32:06.038321  826329 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 00:32:06.038335  826329 command_runner.go:130] > # metrics_port = 9090
	I1208 00:32:06.038341  826329 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 00:32:06.038408  826329 command_runner.go:130] > # metrics_socket = ""
	I1208 00:32:06.038423  826329 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 00:32:06.038430  826329 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 00:32:06.038449  826329 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 00:32:06.038461  826329 command_runner.go:130] > # certificate on any modification event.
	I1208 00:32:06.038588  826329 command_runner.go:130] > # metrics_cert = ""
	I1208 00:32:06.038614  826329 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 00:32:06.038622  826329 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 00:32:06.038740  826329 command_runner.go:130] > # metrics_key = ""
	I1208 00:32:06.038809  826329 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 00:32:06.038823  826329 command_runner.go:130] > [crio.tracing]
	I1208 00:32:06.038829  826329 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 00:32:06.038833  826329 command_runner.go:130] > # enable_tracing = false
	I1208 00:32:06.038876  826329 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1208 00:32:06.038890  826329 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1208 00:32:06.038899  826329 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1208 00:32:06.038973  826329 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1208 00:32:06.038987  826329 command_runner.go:130] > # CRI-O NRI configuration.
	I1208 00:32:06.038992  826329 command_runner.go:130] > [crio.nri]
	I1208 00:32:06.039013  826329 command_runner.go:130] > # Globally enable or disable NRI.
	I1208 00:32:06.039024  826329 command_runner.go:130] > # enable_nri = true
	I1208 00:32:06.039029  826329 command_runner.go:130] > # NRI socket to listen on.
	I1208 00:32:06.039033  826329 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1208 00:32:06.039044  826329 command_runner.go:130] > # NRI plugin directory to use.
	I1208 00:32:06.039198  826329 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1208 00:32:06.039225  826329 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1208 00:32:06.039233  826329 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1208 00:32:06.039239  826329 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1208 00:32:06.039363  826329 command_runner.go:130] > # nri_disable_connections = false
	I1208 00:32:06.039381  826329 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1208 00:32:06.039476  826329 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1208 00:32:06.039494  826329 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1208 00:32:06.039499  826329 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1208 00:32:06.039504  826329 command_runner.go:130] > # NRI default validator configuration.
	I1208 00:32:06.039511  826329 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1208 00:32:06.039518  826329 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1208 00:32:06.039557  826329 command_runner.go:130] > # can be restricted/rejected:
	I1208 00:32:06.039568  826329 command_runner.go:130] > # - OCI hook injection
	I1208 00:32:06.039573  826329 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1208 00:32:06.039586  826329 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1208 00:32:06.039595  826329 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1208 00:32:06.039600  826329 command_runner.go:130] > # - adjustment of linux namespaces
	I1208 00:32:06.039606  826329 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1208 00:32:06.039685  826329 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1208 00:32:06.039812  826329 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1208 00:32:06.039825  826329 command_runner.go:130] > #
	I1208 00:32:06.039830  826329 command_runner.go:130] > # [crio.nri.default_validator]
	I1208 00:32:06.039911  826329 command_runner.go:130] > # nri_enable_default_validator = false
	I1208 00:32:06.039939  826329 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1208 00:32:06.039947  826329 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1208 00:32:06.039959  826329 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1208 00:32:06.039966  826329 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1208 00:32:06.039971  826329 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1208 00:32:06.039975  826329 command_runner.go:130] > # nri_validator_required_plugins = [
	I1208 00:32:06.039978  826329 command_runner.go:130] > # ]
	I1208 00:32:06.039984  826329 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1208 00:32:06.039994  826329 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 00:32:06.040003  826329 command_runner.go:130] > [crio.stats]
	I1208 00:32:06.040013  826329 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 00:32:06.040019  826329 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 00:32:06.040027  826329 command_runner.go:130] > # stats_collection_period = 0
	I1208 00:32:06.040033  826329 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1208 00:32:06.040043  826329 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1208 00:32:06.040047  826329 command_runner.go:130] > # collection_period = 0
	I1208 00:32:06.041802  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994368044Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1208 00:32:06.041819  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994407331Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1208 00:32:06.041829  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994434752Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1208 00:32:06.041836  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994457826Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1208 00:32:06.041847  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994536038Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:06.041867  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994955873Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1208 00:32:06.041895  826329 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 00:32:06.042057  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:06.042089  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:06.042117  826329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:32:06.042147  826329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:32:06.042284  826329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:32:06.042367  826329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:32:06.049993  826329 command_runner.go:130] > kubeadm
	I1208 00:32:06.050024  826329 command_runner.go:130] > kubectl
	I1208 00:32:06.050029  826329 command_runner.go:130] > kubelet
	I1208 00:32:06.051018  826329 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:32:06.051091  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:32:06.059413  826329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:32:06.073688  826329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:32:06.087599  826329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 00:32:06.100920  826329 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:32:06.104607  826329 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:32:06.104862  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:06.223310  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:06.506702  826329 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:32:06.506774  826329 certs.go:195] generating shared ca certs ...
	I1208 00:32:06.506805  826329 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:06.507033  826329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:32:06.507124  826329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:32:06.507152  826329 certs.go:257] generating profile certs ...
	I1208 00:32:06.507310  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:32:06.507422  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:32:06.507510  826329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:32:06.507537  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:32:06.507566  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:32:06.507605  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:32:06.507636  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:32:06.507680  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:32:06.507713  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:32:06.507755  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:32:06.507788  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:32:06.507873  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:32:06.507940  826329 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:32:06.507964  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:32:06.508024  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:32:06.508086  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:32:06.508156  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:32:06.508255  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:06.508336  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.508374  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.508417  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.509152  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:32:06.534629  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:32:06.554458  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:32:06.573968  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:32:06.590997  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:32:06.608508  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:32:06.625424  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:32:06.642336  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:32:06.660002  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:32:06.677652  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:32:06.695647  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:32:06.713354  826329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:32:06.725836  826329 ssh_runner.go:195] Run: openssl version
	I1208 00:32:06.731951  826329 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:32:06.732096  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.739312  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:32:06.746650  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750259  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750312  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750360  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.790520  826329 command_runner.go:130] > 51391683
	I1208 00:32:06.791045  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:32:06.798345  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.805645  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:32:06.813042  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816781  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816807  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816859  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.857524  826329 command_runner.go:130] > 3ec20f2e
	I1208 00:32:06.857994  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:32:06.865262  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.872409  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:32:06.879529  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883021  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883115  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883198  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.923843  826329 command_runner.go:130] > b5213941
	I1208 00:32:06.924322  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
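The three `openssl x509 -hash` / `ln -fs` sequences above follow the standard OpenSSL c_rehash convention: each CA file under /usr/share/ca-certificates gets a `<subject-hash>.0` symlink in /etc/ssl/certs so the library can locate it by hash. A minimal sketch of the same idea in Go, assuming `openssl` is on PATH; the paths are the illustrative ones from the log and writing /etc/ssl/certs requires root:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log above: compute the certificate's subject
// hash with openssl, then create <certsDir>/<hash>.0 pointing at the cert.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```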
	I1208 00:32:06.931656  826329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935287  826329 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935325  826329 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:32:06.935332  826329 command_runner.go:130] > Device: 259,1	Inode: 1322385     Links: 1
	I1208 00:32:06.935354  826329 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:06.935369  826329 command_runner.go:130] > Access: 2025-12-08 00:27:59.408752113 +0000
	I1208 00:32:06.935374  826329 command_runner.go:130] > Modify: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935396  826329 command_runner.go:130] > Change: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935407  826329 command_runner.go:130] >  Birth: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935530  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:32:06.975831  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:06.976261  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:32:07.017790  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.017978  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:32:07.058488  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.058966  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:32:07.099457  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.099917  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:32:07.141471  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.141903  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:32:07.182188  826329 command_runner.go:130] > Certificate will not expire
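Each `-checkend 86400` call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it is still valid for at least that long. The same check can be done in pure Go with crypto/x509 (a sketch; the path is one of the cert paths from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```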
	I1208 00:32:07.182659  826329 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:07.182760  826329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:32:07.182825  826329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:32:07.209144  826329 cri.go:89] found id: ""
	I1208 00:32:07.209214  826329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:32:07.216134  826329 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:32:07.216154  826329 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:32:07.216162  826329 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:32:07.217097  826329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:32:07.217114  826329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:32:07.217178  826329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:32:07.224428  826329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:32:07.224856  826329 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.224961  826329 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "functional-525396" cluster setting kubeconfig missing "functional-525396" context setting]
	I1208 00:32:07.225241  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.225667  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.225818  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.226341  826329 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:32:07.226363  826329 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:32:07.226369  826329 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:32:07.226375  826329 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:32:07.226381  826329 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:32:07.226674  826329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:32:07.226772  826329 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:32:07.234310  826329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:32:07.234378  826329 kubeadm.go:602] duration metric: took 17.25872ms to restartPrimaryControlPlane
	I1208 00:32:07.234395  826329 kubeadm.go:403] duration metric: took 51.743543ms to StartCluster
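The `diff -u` above is how the restart path decides whether the control plane needs to be reconfigured: when the freshly generated kubeadm.yaml.new matches the config already on the node, the running cluster is reused and restartPrimaryControlPlane finishes in milliseconds, as it does here. A minimal sketch of that decision in Go, standard library only, with the two paths from the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Paths taken from the log above.
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	generated, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if bytes.Equal(current, generated) {
		fmt.Println("The running cluster does not require reconfiguration")
	} else {
		fmt.Println("kubeadm config changed; the control plane must be reconfigured")
	}
}
```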
	I1208 00:32:07.234412  826329 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.234484  826329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.235129  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.235358  826329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:32:07.235583  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:07.235658  826329 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:32:07.235740  826329 addons.go:70] Setting storage-provisioner=true in profile "functional-525396"
	I1208 00:32:07.235754  826329 addons.go:239] Setting addon storage-provisioner=true in "functional-525396"
	I1208 00:32:07.235778  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.236237  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.236576  826329 addons.go:70] Setting default-storageclass=true in profile "functional-525396"
	I1208 00:32:07.236601  826329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-525396"
	I1208 00:32:07.236875  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.242309  826329 out.go:179] * Verifying Kubernetes components...
	I1208 00:32:07.245184  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:07.271460  826329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:32:07.274400  826329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.274424  826329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:32:07.274492  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.276071  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.276241  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.276512  826329 addons.go:239] Setting addon default-storageclass=true in "functional-525396"
	I1208 00:32:07.276540  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.276944  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.314823  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.318477  826329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:07.318497  826329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:32:07.318558  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.352646  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.447557  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:07.488721  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.519084  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.257520  826329 node_ready.go:35] waiting up to 6m0s for node "functional-525396" to be "Ready" ...
	I1208 00:32:08.257618  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257654  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257688  826329 retry.go:31] will retry after 154.925821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257654  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.257704  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257722  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257734  826329 retry.go:31] will retry after 240.899479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257750  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.258076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.413579  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.477856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.477934  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.477962  826329 retry.go:31] will retry after 471.79599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.499019  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.559244  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.559341  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.559365  826329 retry.go:31] will retry after 419.613997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
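The repeated "apply failed, will retry" / "will retry after …" pairs above come from a retry helper that sleeps a growing, jittered delay between attempts while the apiserver on port 8441 is still refusing connections. A minimal stand-in for that pattern (a hypothetical helper, not minikube's retry.go; the failing function is a placeholder for the kubectl apply over SSH):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between attempts -- the same
// shape as the "will retry after 154ms / 240ms / 471ms / ..." lines above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		// Placeholder for applying the addon manifest while the apiserver is down.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
```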
	I1208 00:32:08.758693  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.758772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.759084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.950598  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.979140  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.022887  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.022933  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.022979  826329 retry.go:31] will retry after 789.955074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083550  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.083656  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083684  826329 retry.go:31] will retry after 584.522236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.668477  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.723720  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.727856  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.727932  826329 retry.go:31] will retry after 996.136704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.757987  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.758082  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.813684  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:09.865943  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.869391  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.869422  826329 retry.go:31] will retry after 1.082403251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.257910  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:10.258329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
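Each GET on /api/v1/nodes/functional-525396 above is one iteration of the node-ready poll; it keeps failing with "connection refused" because the apiserver on 192.168.49.2:8441 is not back up yet. A sketch of a single such probe with client-go (the client-go dependency is assumed; the kubeconfig path is the one from the log):

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has a Ready condition set to True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22054-789938/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ready, err := nodeReady(context.Background(), cs, "functional-525396")
	fmt.Println("ready:", ready, "err:", err)
}
```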
	I1208 00:32:10.724942  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:10.758490  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.758896  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.786956  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:10.787023  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.787045  826329 retry.go:31] will retry after 1.653307887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.952461  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:11.017630  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:11.017682  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.017706  826329 retry.go:31] will retry after 1.450018323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.257721  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.258081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.757826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.757911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.258016  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.258092  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.258398  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:12.258449  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.440941  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:12.468519  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:12.523147  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.523192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.523212  826329 retry.go:31] will retry after 1.808868247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537050  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.537096  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537115  826329 retry.go:31] will retry after 1.005297336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.758616  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.758689  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.758985  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.257733  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.542714  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:13.607721  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:13.607772  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.607793  826329 retry.go:31] will retry after 2.59048957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.758025  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.758103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.257759  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.257837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.332402  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:14.393856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:14.393908  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.393927  826329 retry.go:31] will retry after 3.003957784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.758447  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.758779  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:14.758833  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:15.258432  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.258504  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.258873  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.758697  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.758770  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.198619  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:16.257994  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.258110  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.258333  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.261663  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:16.261706  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.261724  826329 retry.go:31] will retry after 3.921003057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.758355  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.758442  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.758740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.258595  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.258667  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.259014  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:17.259070  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:17.398537  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:17.459046  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:17.459087  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.459108  826329 retry.go:31] will retry after 6.352068949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.758636  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.758713  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.759027  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.757758  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.758113  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.258205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.757895  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:19.758338  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:20.183008  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:20.244376  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.244427  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.244447  826329 retry.go:31] will retry after 4.642616038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.258603  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.258946  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.757858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.758256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.757997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.758369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:22.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.757950  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.758369  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.257963  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.258271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.758124  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.758456  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.758513  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.811708  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:23.877239  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:23.877286  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:23.877305  826329 retry.go:31] will retry after 3.991513365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.257726  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.757814  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.757890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.887652  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:24.946807  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:24.946870  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.946894  826329 retry.go:31] will retry after 6.868435312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:25.258372  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.258452  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.258751  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.758655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.759159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.759287  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:26.257937  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.258011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.258320  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.757849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.758164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.258591  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.758609  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.869339  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:27.929619  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:27.929669  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:27.929689  826329 retry.go:31] will retry after 5.640751927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:28.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.258197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:28.258246  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.757900  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.257906  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.758680  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.758746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.759010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:30.759051  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:31.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.258120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.757934  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.815479  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:31.877679  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:31.877725  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:31.877744  826329 retry.go:31] will retry after 9.288265427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:32.258204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.258274  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.258579  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.758594  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.758959  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.257805  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.258256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:33.258316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:33.570705  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:33.628260  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:33.631756  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.631797  826329 retry.go:31] will retry after 7.380803559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.758003  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.758091  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.257826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.257908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.757933  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.757723  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:35.758156  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:36.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.257953  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.258310  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.758204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.758282  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.758636  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:37.758697  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:38.258444  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.258520  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.258964  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.758657  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.758988  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.258591  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.259009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.757689  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.757764  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.758032  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.257724  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.257806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.258168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:40.258225  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:40.757812  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.757892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.013670  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:41.072281  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.076192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.076223  826329 retry.go:31] will retry after 30.64284814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.166454  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:41.227404  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.227446  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.227466  826329 retry.go:31] will retry after 28.006603896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.258583  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.258655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.758793  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.758886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.759193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.257895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.258236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:42.258293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.758154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.758523  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.258386  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.258459  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.258782  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.758542  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.758614  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.758961  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.258683  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.258759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:44.259091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.757800  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.758206  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.258097  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.259164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.757651  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.757746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.758010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.257735  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.257815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.258117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.757885  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.757969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.758288  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.758347  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:47.258326  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.258400  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.258685  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.758684  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.758763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.759114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.257709  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.757752  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.758123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:49.258218  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:49.757765  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.758188  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:51.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.258204  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:51.258253  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:51.757903  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.757978  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.758301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.757965  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.758392  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:53.758279  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:54.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.257882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:54.757818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.757897  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.258277  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.757925  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:55.758403  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:56.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.258035  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.258362  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:56.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.258678  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.258763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.259088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.757900  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.757974  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:58.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.258215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:58.258269  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:58.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.758311  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.257792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.258100  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.757787  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:00.257846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:00.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:00.758031  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.758108  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.757962  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:02.257983  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.258055  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.258387  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:02.258456  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:02.757985  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.758059  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.258055  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.258125  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.258438  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.757882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:04.257989  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:04.258481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:04.758118  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.758201  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.758485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.258270  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.758448  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.758527  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.758934  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.257684  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.257772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.258049  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:06.758206  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:07.258726  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.258824  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.259215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:07.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.758011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.758271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.257849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:08.758228  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:09.234960  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:09.258398  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.258467  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.258726  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:09.299771  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:09.299811  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.299830  826329 retry.go:31] will retry after 22.917133282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
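The apply above fails because the apiserver at 192.168.49.2:8441 is refusing connections, so kubectl cannot download the OpenAPI schema it uses for client-side validation; minikube's retry.go then simply reschedules the apply with a longer wait. Below is a minimal sketch of that retry-with-backoff pattern, with hypothetical helper names; it is not minikube's actual retry.go (which uses jittered waits rather than plain doubling).

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// roughly doubling the wait between tries. This is a sketch of the pattern
// visible in the retry.go log lines above, not minikube's implementation.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, 2*time.Second, func() error {
		// Stand-in for the kubectl apply that fails while the apiserver is down.
		return errors.New("connect: connection refused")
	})
	fmt.Println(err)
}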
	I1208 00:33:09.758561  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.758640  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.758995  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.258770  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.258868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.259197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.757838  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.758190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.257813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:11.258179  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:11.719678  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:11.758124  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.758203  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.758476  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.779600  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:11.783324  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:11.783357  826329 retry.go:31] will retry after 27.574784486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
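The GET requests interleaved throughout this log are minikube's node-ready wait loop polling /api/v1/nodes/functional-525396 roughly every 500ms and logging a warning each time the node's "Ready" condition cannot be fetched. A client-go sketch of that kind of check is shown below; the kubeconfig path and node name are taken from the log for illustration only, and this is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location, mirroring the KUBECONFIG used in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-525396", metav1.GetOptions{})
		if err != nil {
			// Corresponds to the "connection refused" warnings above while
			// the apiserver is down; keep retrying.
			fmt.Println("error getting node (will retry):", err)
		} else if nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}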
	I1208 00:33:12.257740  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.258104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:12.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:13.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.258219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:13.258272  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:13.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.757988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.257958  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.258037  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.258315  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:15.258360  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:15.757919  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.757879  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.257963  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.258036  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.258357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:17.258414  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.758272  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.758354  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.758668  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.258406  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.258487  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.258798  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.758471  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.758544  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.758891  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.258691  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.258772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.259134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.259190  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.757739  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.758088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.757870  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.757943  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.758290  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.757993  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.257852  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.258182  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:24.258220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.757878  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.758349  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.258345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.257811  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.258284  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.758040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.258252  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.258330  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.258588  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.758645  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.758735  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.759079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.758067  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.758108  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.757789  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.257875  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.257941  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.258210  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.757889  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.758308  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.257774  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.757714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.757784  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.758087  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.217681  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:32.258110  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.258497  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.272413  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:32.276021  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.276065  826329 retry.go:31] will retry after 31.830018043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.258151  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.258517  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.758451  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.258598  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.259035  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.758635  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.758714  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.759056  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.257714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.258111  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.758267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.257939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.757891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.258214  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.258289  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.258578  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:37.258623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.758354  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.758421  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.758674  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.258403  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.258497  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.258867  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.758486  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.758558  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.758906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.258694  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.258758  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.259030  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.259072  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.358376  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:39.412374  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416050  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416143  826329 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
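kubectl's own error text points at the workaround it supports when the OpenAPI endpoint cannot be reached: skip client-side validation with --validate=false. Note this only helps when validation is the sole blocker; if the apiserver itself stays down, as in this run, the apply would still fail server-side. A hedged sketch of issuing that form of the command from Go, using the same manifest path as the ssh_runner invocations above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --validate=false skips the OpenAPI download that fails while
	// localhost:8441 refuses connections.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}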
	I1208 00:33:39.758638  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.758720  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.759108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.757846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.757931  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.257809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.757977  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.758050  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.758393  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:42.258098  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.258182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.258488  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.758485  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.758557  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.758915  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.258576  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.258649  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.258992  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.757700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.757773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.758038  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.257757  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.258132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:44.258184  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.757809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.757999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.758336  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.258084  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.258468  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:46.258519  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.758126  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.758195  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.758462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.258480  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.258906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.758307  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.257842  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.758219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.758291  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:49.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.258184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.757922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.757790  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.257971  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.258282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:51.258346  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.757834  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.757908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.758182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.758452  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.258459  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.258900  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:53.258955  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.758700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.758780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.759083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.258123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.758170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:55.758182  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:56.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.757939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.758018  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.758340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.258337  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.258409  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.258677  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.758592  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:57.759063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:58.257674  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.257773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.757693  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.757771  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.758081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.258187  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.758199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.265698  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.265780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.266096  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:00.266143  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:00.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.757872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.258053  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.257892  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.258340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.758185  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.758273  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.758590  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:02.758643  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:03.258621  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.258702  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.757895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.758191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.106865  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:34:04.166273  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166323  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166403  826329 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:34:04.169502  826329 out.go:179] * Enabled addons: 
	I1208 00:34:04.171536  826329 addons.go:530] duration metric: took 1m56.935875389s for enable addons: enabled=[]
	I1208 00:34:04.258604  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.258682  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.259013  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.758662  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.758731  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.759011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:04.759062  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:05.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.758048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.758370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.257730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.258101  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.758131  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.758204  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.758570  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.258500  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.258586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.258950  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:07.259055  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:07.757997  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.758357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.257713  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.257788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.258063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:09.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:10.257921  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.258346  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.757735  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.757804  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.758062  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.757910  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:11.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:12.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.258391  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.757907  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.757979  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.258000  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.258079  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.757976  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.758046  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.758318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:14.258216  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:14.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.758229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.257940  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.258013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.258338  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:16.258395  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:16.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.758127  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.258701  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.258775  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.757896  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.758282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.257973  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.258048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.757762  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:18.758243  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.258352  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.758033  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.758409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.757890  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.757981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.758323  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:20.758384  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:21.257944  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.258010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.758322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.257850  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.257925  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.258270  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.758019  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.758365  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:22.758408  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:23.258071  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.258151  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.258491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.758281  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.758363  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.758707  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.258477  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.258561  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.759183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.759247  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:25.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.258000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.757806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.758120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.258248  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.757971  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.758380  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.258327  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.258401  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.258666  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:27.258716  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.758723  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.758798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.759103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.258027  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.258370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.758085  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.758508  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:29.758566  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:30.258264  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.258340  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.258608  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.758360  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.758437  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.758793  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.258627  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.258701  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.259047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.757815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.758076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.257780  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:32.258235  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:32.758097  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.758176  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.258283  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.258362  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.258621  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.758421  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.758509  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.758874  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.258773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.259148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:34.259210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.757843  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.757921  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.757995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.758360  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.257977  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.258049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.757866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.758233  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:37.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.257964  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.258296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.758129  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.758200  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.758490  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.258191  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.258269  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.758454  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.758534  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.758898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.758959  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:39.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.258627  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.258916  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.758708  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.759139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.257796  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.757783  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.758212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:41.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:41.258249  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:41.757913  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.758308  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.758011  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.758449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:43.258150  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.258227  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.258566  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:43.258632  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:43.758358  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.758430  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.758722  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.258546  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.259073  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.757871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.257935  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.258485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.758673  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.758756  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:45.759202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:46.257864  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.257946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.258291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:46.758013  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.258513  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.258598  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.259004  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.757974  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.758047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:48.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.257839  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.258125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:48.258175  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:48.757743  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.757816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.758138  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.257906  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.758137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:50.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.257875  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:50.258267  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:50.757934  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.758014  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.758361  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.258044  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.258119  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.258431  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.758821  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.758917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.759213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.757986  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.758060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.758375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:52.758428  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
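Each ~500 ms request/response/warning cycle above is the node-Ready poll retrying until the apiserver answers. A minimal sketch of that wait-for-Ready pattern using client-go and apimachinery's wait helpers is shown below; the function name, package name, and interval/timeout values are assumptions for illustration, not minikube's actual node_ready implementation.

```go
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object every 500ms until its Ready condition is
// True or the timeout expires. Transient errors (such as "connection refused")
// are logged and retried rather than aborting, mirroring the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Returning (false, nil) keeps the poll going on transient errors.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```

Returning `(false, nil)` from the condition on errors is the design choice that produces the long retry run seen here: the loop only gives up when the overall timeout expires, not on individual failed requests.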
	I1208 00:34:53.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:53.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.758227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.757810  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.757886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:55.257839  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.257917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:55.258313  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:55.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.757796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.757854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.758141  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:57.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:57.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:57.758246  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.758647  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.258478  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.258560  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.258910  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.257905  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.258259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.758063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.758436  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:59.758494  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:00.270583  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.271106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.271544  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:00.758373  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.758448  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.758792  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.258597  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.259052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:02.257942  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.258019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.258319  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:02.258369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:02.758254  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.758335  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.758657  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.258485  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.258576  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.258926  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.757769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.258084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:04.758220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:05.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.257988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.258274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.257890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.258218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.758268  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:07.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.258264  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.258524  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.758503  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.758579  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.758911  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.258711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.258788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.259165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.758114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:09.258314  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:09.757867  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.257728  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.758154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.257828  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.257901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:11.758292  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.758010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.758331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.257734  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.258128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.757740  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.758156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.257879  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.257958  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:14.258372  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:14.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.258226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.757850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:16.758262  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.758126  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.258225  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.757982  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.758084  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:18.758496  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
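The repeated `dial tcp 192.168.49.2:8441: connect: connection refused` indicates that nothing is accepting connections on the apiserver port at this point (the kube-apiserver is down or restarting), rather than a TLS or authentication failure. A quick way to confirm that from the host is a raw TCP dial; a minimal sketch follows, with the address hard-coded purely for illustration.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoint with a plain TCP dial. "connection refused" at
// this layer means no process is listening on the port; a successful dial
// would shift suspicion to TLS or API-level problems instead.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connection established; apiserver port is listening")
}
```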
	I1208 00:35:19.258078  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.258148  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.258462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.758152  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.257773  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.257847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.258174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.757731  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.758079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:21.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:21.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.758255  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.258007  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.258298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.757958  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.758029  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.257782  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.757721  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.757792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:23.758157  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.257916  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.757747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.757838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.257741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.258153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:25.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.257792  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.257867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.258190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.757716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.757791  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.758047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.257747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.257826  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.258159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:27.758399  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-525396 request/response cycle repeated every ~500 ms from 00:35:27 through 00:36:29, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready "will retry" warning above recurred roughly every 2-2.5 seconds throughout ...]
	I1208 00:36:29.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.757822  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.758078  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.257824  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.257913  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.258331  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:30.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.757915  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.257869  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.257937  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.257781  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.757940  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:32.758305  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:33.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.258196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:33.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.758193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.257750  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.757815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.757887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.257918  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.257997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.258317  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:35.258379  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:35.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.757819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.758135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.257783  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.258193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.758166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.258659  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.258733  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:37.259083  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:37.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.758024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.758345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.757932  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.758013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.758289  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.757952  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:39.758433  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:40.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.257793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.258042  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:40.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.257744  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:42.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:42.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.758448  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.757926  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.258047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:44.258465  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:44.757755  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.757827  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.257829  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.257930  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.758253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.757828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:46.758229  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:47.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.257985  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.258332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:47.757967  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.758296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.257872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.757878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:48.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:49.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.757898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.758139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:51.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.257880  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.258144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:51.258193  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:51.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.758200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.257870  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.258287  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.758014  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.758414  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:53.258138  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.258234  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.258594  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:53.258654  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:53.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.257895  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.257969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.258267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.758150  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:55.758195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:56.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.258194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:56.757733  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.758064  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.258687  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.258769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.259122  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.757909  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.757984  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:57.758349  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:58.257827  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.257904  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:58.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.758197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.257858  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.257940  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.758280  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:00.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.258083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.258409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:00.258457  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:00.758379  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.758466  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.758803  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.258644  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.258737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.259037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:02.758316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:03.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.258232  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:03.757961  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.758042  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.758415  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.258085  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.258154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.258494  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.758211  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.758302  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.758664  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:04.758720  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:05.258496  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.258572  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.258935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:05.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.757745  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.758009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.258149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.758260  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:07.258197  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.258266  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.258533  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:07.258574  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:07.758487  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.758564  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.758919  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.258731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.258806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.259157  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.757712  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.757783  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.758052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.758285  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:09.758354  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:10.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.257812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.258068  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:10.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.758172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.758165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:12.257867  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:12.258328  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:12.758227  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.758306  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.758623  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.258376  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.258454  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.258723  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.758551  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.758624  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.758979  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.757823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:14.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:15.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:15.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.758236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.257917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.258276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:16.758276  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:17.257980  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.258060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:17.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.758343  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.258231  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.757795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.757884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.758230  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:19.257736  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:19.258185  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	[the same GET https://192.168.49.2:8441/api/v1/nodes/functional-525396 poll repeats every ~500 ms from 00:37:19.757 through 00:38:07.263 with identical request headers; every attempt returns an empty response (milliseconds=0) and the node_ready.go:55 "will retry ... connection refused" warning recurs roughly every 2-2.5 s throughout]
	I1208 00:38:07.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:08.257967  826329 node_ready.go:38] duration metric: took 6m0.00040399s for node "functional-525396" to be "Ready" ...
	I1208 00:38:08.261085  826329 out.go:203] 
	W1208 00:38:08.263874  826329 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:38:08.263896  826329 out.go:285] * 
	* 
	W1208 00:38:08.266040  826329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:38:08.269117  826329 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-525396 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.194293466s for "functional-525396" cluster.
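The stderr loop above is minikube's node-readiness wait: the same node GET is retried for the full 6-minute budget and every attempt is refused, i.e. the apiserver on 192.168.49.2:8441 never comes back after the soft start, so the Ready condition can never be read. A minimal, hedged way to reproduce the same checks by hand against this profile (standard minikube/docker/kubectl/curl usage, not commands taken from this log; 127.0.0.1:33511 is the host port that the docker inspect output below shows published for 8441/tcp):

	# is a kube-apiserver container present inside the kicbase node at all?
	docker exec functional-525396 crictl ps -a --name kube-apiserver
	# kubelet state inside the node container
	docker exec functional-525396 systemctl status kubelet --no-pager
	# the same probe the test loops on, via the forwarded host port
	curl -sk https://127.0.0.1:33511/healthz
	# Ready condition, assuming the profile's kubeconfig context exists
	kubectl --context functional-525396 get node functional-525396 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# full log bundle for a bug report, as the box above suggests
	out/minikube-linux-arm64 -p functional-525396 logs --file=logs.txt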
I1208 00:38:08.896283  791807 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
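The docker inspect dump that follows is long; the parts relevant to the refused connection above are the container state and the published port for 8441/tcp. A hedged shortcut for pulling just those fields, assuming the standard docker CLI Go-template syntax rather than anything specific to this test harness:

	# container state and init pid
	docker inspect -f '{{ .State.Status }} (pid {{ .State.Pid }})' functional-525396
	# host side of the apiserver port mapping (8441/tcp -> 127.0.0.1:33511 below)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-525396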
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
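The Ports map in the inspect output above is what the harness reads back to reach the node over the forwarded localhost ports. A minimal sketch, assuming only that the Docker CLI is on PATH, of extracting the 22/tcp host port with the same Go template minikube runs later in this log (the expected value here would be 33508, per the NetworkSettings.Ports block above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same format template as the "docker container inspect -f" calls below.
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"functional-525396").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out)))
    }
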
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (342.87039ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 logs -n 25: (1.062140336s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /usr/share/ca-certificates/7918072.pem                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save kicbase/echo-server:functional-714395 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image rm kicbase/echo-server:functional-714395 --alsologtostderr                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format short --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format yaml --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format json --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format table --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh pgrep buildkitd                                                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image          │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                                    │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete         │ -p functional-714395                                                                                                                                      │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start          │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-525396 --alsologtostderr -v=8                                                                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:32:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:32:02.748489  826329 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:32:02.748673  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748687  826329 out.go:374] Setting ErrFile to fd 2...
	I1208 00:32:02.748692  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748975  826329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:32:02.749379  826329 out.go:368] Setting JSON to false
	I1208 00:32:02.750240  826329 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18855,"bootTime":1765135068,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:32:02.750321  826329 start.go:143] virtualization:  
	I1208 00:32:02.755521  826329 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:32:02.759227  826329 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:32:02.759498  826329 notify.go:221] Checking for updates...
	I1208 00:32:02.765171  826329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:32:02.768668  826329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:02.771686  826329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:32:02.774728  826329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:32:02.777727  826329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:32:02.781794  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:02.781971  826329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:32:02.823053  826329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:32:02.823186  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.879429  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.869702269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.879546  826329 docker.go:319] overlay module found
	I1208 00:32:02.884410  826329 out.go:179] * Using the docker driver based on existing profile
	I1208 00:32:02.887311  826329 start.go:309] selected driver: docker
	I1208 00:32:02.887330  826329 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.887447  826329 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:32:02.887565  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.942385  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.932846048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.942810  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:02.942902  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:02.942960  826329 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.948301  826329 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:32:02.951106  826329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:32:02.954049  826329 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:32:02.956917  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:02.956968  826329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:32:02.956999  826329 cache.go:65] Caching tarball of preloaded images
	I1208 00:32:02.957004  826329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:32:02.957092  826329 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:32:02.957103  826329 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:32:02.957210  826329 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:32:02.976499  826329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:32:02.976524  826329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:32:02.976543  826329 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:32:02.976579  826329 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:32:02.976652  826329 start.go:364] duration metric: took 48.116µs to acquireMachinesLock for "functional-525396"
	I1208 00:32:02.976674  826329 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:32:02.976683  826329 fix.go:54] fixHost starting: 
	I1208 00:32:02.976940  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:02.996203  826329 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:32:02.996234  826329 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:32:02.999434  826329 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:32:02.999477  826329 machine.go:94] provisionDockerMachine start ...
	I1208 00:32:02.999559  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.021375  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.021746  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.021762  826329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:32:03.174523  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.174550  826329 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:32:03.174616  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.192743  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.193067  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.193084  826329 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:32:03.356577  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.356704  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.375055  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.375394  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.375419  826329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:32:03.529767  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
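	Everything in the provisioning step above runs over SSH against the forwarded 22/tcp port (33508) using the machine's id_rsa key. A stand-alone sketch of that pattern with golang.org/x/crypto/ssh, purely illustrative (minikube uses its own sshutil/ssh_runner wrappers rather than this code):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33508", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname") // the first command the provisioner runs above
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out)) // expected: functional-525396
    }
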
	I1208 00:32:03.529793  826329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:32:03.529822  826329 ubuntu.go:190] setting up certificates
	I1208 00:32:03.529839  826329 provision.go:84] configureAuth start
	I1208 00:32:03.529901  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:03.552219  826329 provision.go:143] copyHostCerts
	I1208 00:32:03.552258  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552298  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:32:03.552310  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552383  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:32:03.552464  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552480  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:32:03.552484  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552511  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:32:03.552550  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552566  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:32:03.552570  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552592  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:32:03.552642  826329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
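	The server certificate above carries the listed SANs (loopback, the node IP, the profile name, localhost, minikube) and is signed with the profile's CA key. A rough sketch of building a certificate with those SANs using Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the CA key named in the log line:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-525396"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"functional-525396", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	// Self-signed here (template used as its own parent) for illustration only.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
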
	I1208 00:32:03.707027  826329 provision.go:177] copyRemoteCerts
	I1208 00:32:03.707105  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:32:03.707150  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.724035  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:03.830514  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:32:03.830586  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:32:03.848126  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:32:03.848238  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:32:03.865293  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:32:03.865368  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:32:03.882781  826329 provision.go:87] duration metric: took 352.917637ms to configureAuth
	I1208 00:32:03.882808  826329 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:32:03.883086  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:03.883204  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.900405  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.900722  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.900745  826329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:32:04.247102  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:32:04.247132  826329 machine.go:97] duration metric: took 1.247646186s to provisionDockerMachine
	I1208 00:32:04.247143  826329 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:32:04.247156  826329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:32:04.247233  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:32:04.247291  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.269420  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.374672  826329 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:32:04.377926  826329 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:32:04.377948  826329 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:32:04.377953  826329 command_runner.go:130] > VERSION_ID="12"
	I1208 00:32:04.377958  826329 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:32:04.377964  826329 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:32:04.377968  826329 command_runner.go:130] > ID=debian
	I1208 00:32:04.377973  826329 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:32:04.377998  826329 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:32:04.378009  826329 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:32:04.378363  826329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:32:04.378386  826329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:32:04.378397  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:32:04.378453  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:32:04.378535  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:32:04.378546  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 00:32:04.378621  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:32:04.378628  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> /etc/test/nested/copy/791807/hosts
	I1208 00:32:04.378672  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:32:04.386632  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:04.404202  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:32:04.421545  826329 start.go:296] duration metric: took 174.385446ms for postStartSetup
	I1208 00:32:04.421649  826329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:32:04.421695  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.439941  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.543929  826329 command_runner.go:130] > 13%
	I1208 00:32:04.544005  826329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:32:04.548692  826329 command_runner.go:130] > 169G
	I1208 00:32:04.548719  826329 fix.go:56] duration metric: took 1.572034198s for fixHost
	I1208 00:32:04.548730  826329 start.go:83] releasing machines lock for "functional-525396", held for 1.572067364s
	I1208 00:32:04.548856  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:04.565574  826329 ssh_runner.go:195] Run: cat /version.json
	I1208 00:32:04.565638  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.565923  826329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:32:04.565984  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.584847  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.600519  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.771794  826329 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:32:04.774495  826329 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:32:04.774657  826329 ssh_runner.go:195] Run: systemctl --version
	I1208 00:32:04.780874  826329 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:32:04.780917  826329 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:32:04.781367  826329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:32:04.818112  826329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:32:04.822491  826329 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:32:04.822532  826329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:32:04.822595  826329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:32:04.830492  826329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:32:04.830518  826329 start.go:496] detecting cgroup driver to use...
	I1208 00:32:04.830579  826329 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:32:04.830661  826329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:32:04.846467  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:32:04.859999  826329 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:32:04.860093  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:32:04.876040  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:32:04.889316  826329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:32:04.999380  826329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:32:05.135529  826329 docker.go:234] disabling docker service ...
	I1208 00:32:05.135652  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:32:05.150887  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:32:05.164082  826329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:32:05.274195  826329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:32:05.386139  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:32:05.399321  826329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:32:05.411741  826329 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 00:32:05.412925  826329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:32:05.413007  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.421375  826329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:32:05.421462  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.430145  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.438751  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.447666  826329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:32:05.455572  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.464290  826329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.472537  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
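	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands shown, not a capture of the file, and the section placement is assumed from CRI-O's usual drop-in layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
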
	I1208 00:32:05.481189  826329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:32:05.487727  826329 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:32:05.488614  826329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:32:05.496261  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:05.603146  826329 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:32:05.769023  826329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:32:05.769169  826329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:32:05.773391  826329 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 00:32:05.773452  826329 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:32:05.773473  826329 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1208 00:32:05.773494  826329 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:05.773524  826329 command_runner.go:130] > Access: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773553  826329 command_runner.go:130] > Modify: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773581  826329 command_runner.go:130] > Change: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773598  826329 command_runner.go:130] >  Birth: -
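	For the 60-second wait above, minikube budgets the timeout and checks for the socket with stat over SSH (here it succeeded on the first try). A local Go analogue of that polling loop, for illustration only:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("%s did not appear within %s", path, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio.sock is ready")
    }
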
	I1208 00:32:05.774292  826329 start.go:564] Will wait 60s for crictl version
	I1208 00:32:05.774387  826329 ssh_runner.go:195] Run: which crictl
	I1208 00:32:05.778688  826329 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:32:05.779547  826329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:32:05.803509  826329 command_runner.go:130] > Version:  0.1.0
	I1208 00:32:05.803790  826329 command_runner.go:130] > RuntimeName:  cri-o
	I1208 00:32:05.804036  826329 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1208 00:32:05.804294  826329 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:32:05.806608  826329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:32:05.806739  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.840244  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.840321  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.840340  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.840361  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.840391  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.840415  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.840434  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.840452  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.840471  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.840498  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.840519  826329 command_runner.go:130] >      static
	I1208 00:32:05.840536  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.840553  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.840567  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.840593  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.840612  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.840629  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.840647  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.840664  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.840690  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.841800  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.872333  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.872357  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.872369  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.872376  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.872381  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.872385  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.872389  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.872395  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.872399  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.872408  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.872412  826329 command_runner.go:130] >      static
	I1208 00:32:05.872422  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.872437  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.872444  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.872448  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.872451  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.872457  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.872463  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.872467  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.872480  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.877414  826329 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:32:05.880269  826329 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:32:05.896780  826329 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:32:05.900764  826329 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:32:05.900873  826329 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:32:05.900985  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:05.901051  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.935654  826329 command_runner.go:130] > {
	I1208 00:32:05.935679  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.935684  826329 command_runner.go:130] >     {
	I1208 00:32:05.935694  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.935699  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935705  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.935708  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935713  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935724  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.935736  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.935743  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935756  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.935763  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935768  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935772  826329 command_runner.go:130] >     },
	I1208 00:32:05.935775  826329 command_runner.go:130] >     {
	I1208 00:32:05.935781  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.935787  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935793  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.935796  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935800  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935810  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.935821  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.935825  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935829  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.935836  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935845  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935853  826329 command_runner.go:130] >     },
	I1208 00:32:05.935857  826329 command_runner.go:130] >     {
	I1208 00:32:05.935864  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.935870  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935876  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.935879  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935885  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935894  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.935905  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.935908  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935912  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.935917  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.935923  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935927  826329 command_runner.go:130] >     },
	I1208 00:32:05.935932  826329 command_runner.go:130] >     {
	I1208 00:32:05.935938  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.935946  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935956  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.935962  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935967  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935975  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.935986  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.935990  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935994  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.936001  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936006  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936011  826329 command_runner.go:130] >       },
	I1208 00:32:05.936021  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936028  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936031  826329 command_runner.go:130] >     },
	I1208 00:32:05.936034  826329 command_runner.go:130] >     {
	I1208 00:32:05.936041  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.936048  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936053  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.936057  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936063  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936072  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.936083  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.936087  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936091  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.936095  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936101  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936105  826329 command_runner.go:130] >       },
	I1208 00:32:05.936110  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936116  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936119  826329 command_runner.go:130] >     },
	I1208 00:32:05.936122  826329 command_runner.go:130] >     {
	I1208 00:32:05.936129  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.936136  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936143  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.936152  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936160  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936169  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.936179  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.936184  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936189  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.936195  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936199  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936203  826329 command_runner.go:130] >       },
	I1208 00:32:05.936207  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936215  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936219  826329 command_runner.go:130] >     },
	I1208 00:32:05.936222  826329 command_runner.go:130] >     {
	I1208 00:32:05.936228  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.936235  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936240  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.936244  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936255  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936263  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.936271  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.936277  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936282  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.936288  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936292  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936295  826329 command_runner.go:130] >     },
	I1208 00:32:05.936298  826329 command_runner.go:130] >     {
	I1208 00:32:05.936306  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.936313  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936318  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.936322  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936326  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936336  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.936362  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.936372  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936377  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.936387  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936391  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936395  826329 command_runner.go:130] >       },
	I1208 00:32:05.936406  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936410  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936414  826329 command_runner.go:130] >     },
	I1208 00:32:05.936417  826329 command_runner.go:130] >     {
	I1208 00:32:05.936424  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.936432  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936437  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.936441  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936445  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936455  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.936465  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.936469  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936473  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.936483  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936487  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.936490  826329 command_runner.go:130] >       },
	I1208 00:32:05.936500  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936504  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.936507  826329 command_runner.go:130] >     }
	I1208 00:32:05.936510  826329 command_runner.go:130] >   ]
	I1208 00:32:05.936513  826329 command_runner.go:130] > }
	I1208 00:32:05.936690  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.936705  826329 crio.go:433] Images already preloaded, skipping extraction
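The preload decision above is made from the JSON that `sudo crictl images --output json` prints. A hedged sketch of that check, assuming only the JSON shape visible in the log (the real logic lives in minikube's crio.go and compares against the preload manifest):

// Sketch, not minikube's implementation: decide whether the expected
// Kubernetes images are already present in the CRI-O image store.
package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

// Matches the fields visible in the crictl output above.
type imageList struct {
    Images []struct {
        ID       string   `json:"id"`
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func main() {
    out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    if err != nil {
        panic(err)
    }
    var list imageList
    if err := json.Unmarshal(out, &list); err != nil {
        panic(err)
    }
    // Images that must already be present for extraction to be skipped
    // (tags taken from the listing above).
    want := []string{
        "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
        "registry.k8s.io/etcd:3.6.5-0",
        "registry.k8s.io/coredns/coredns:v1.13.1",
        "registry.k8s.io/pause:3.10.1",
    }
    have := map[string]bool{}
    for _, img := range list.Images {
        for _, tag := range img.RepoTags {
            have[tag] = true
        }
    }
    var missing []string
    for _, w := range want {
        if !have[w] {
            missing = append(missing, w)
        }
    }
    if len(missing) == 0 {
        fmt.Println("all images are preloaded for cri-o runtime")
    } else {
        fmt.Println("missing:", strings.Join(missing, ", "))
    }
}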
	I1208 00:32:05.936757  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.965491  826329 command_runner.go:130] > {
	I1208 00:32:05.965510  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.965515  826329 command_runner.go:130] >     {
	I1208 00:32:05.965525  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.965542  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965549  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.965553  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965557  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965584  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.965593  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.965596  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965600  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.965604  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965614  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965618  826329 command_runner.go:130] >     },
	I1208 00:32:05.965620  826329 command_runner.go:130] >     {
	I1208 00:32:05.965627  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.965630  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965635  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.965639  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965642  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965650  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.965659  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.965662  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965666  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.965669  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965675  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965679  826329 command_runner.go:130] >     },
	I1208 00:32:05.965682  826329 command_runner.go:130] >     {
	I1208 00:32:05.965689  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.965692  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965700  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.965704  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965708  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965715  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.965723  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.965726  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965733  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.965738  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.965741  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965744  826329 command_runner.go:130] >     },
	I1208 00:32:05.965747  826329 command_runner.go:130] >     {
	I1208 00:32:05.965754  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.965758  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965763  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.965768  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965772  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965779  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.965786  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.965789  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965793  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.965796  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965800  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965803  826329 command_runner.go:130] >       },
	I1208 00:32:05.965811  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965815  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965818  826329 command_runner.go:130] >     },
	I1208 00:32:05.965821  826329 command_runner.go:130] >     {
	I1208 00:32:05.965827  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.965831  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965841  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.965844  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965848  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965859  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.965867  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.965870  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965874  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.965877  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965881  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965884  826329 command_runner.go:130] >       },
	I1208 00:32:05.965891  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965895  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965898  826329 command_runner.go:130] >     },
	I1208 00:32:05.965901  826329 command_runner.go:130] >     {
	I1208 00:32:05.965907  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.965911  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965917  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.965920  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965924  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965932  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.965944  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.965947  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965951  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.965954  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965958  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965961  826329 command_runner.go:130] >       },
	I1208 00:32:05.965964  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965968  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965971  826329 command_runner.go:130] >     },
	I1208 00:32:05.965974  826329 command_runner.go:130] >     {
	I1208 00:32:05.965980  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.965984  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965989  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.965992  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965995  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966003  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.966013  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.966016  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966020  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.966023  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966027  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966030  826329 command_runner.go:130] >     },
	I1208 00:32:05.966033  826329 command_runner.go:130] >     {
	I1208 00:32:05.966042  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.966046  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966051  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.966054  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966058  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966066  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.966082  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.966086  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966090  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.966094  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966097  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.966100  826329 command_runner.go:130] >       },
	I1208 00:32:05.966104  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966109  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966112  826329 command_runner.go:130] >     },
	I1208 00:32:05.966117  826329 command_runner.go:130] >     {
	I1208 00:32:05.966124  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.966127  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966131  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.966136  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966140  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966149  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.966156  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.966160  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966163  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.966167  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966171  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.966173  826329 command_runner.go:130] >       },
	I1208 00:32:05.966177  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966180  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.966183  826329 command_runner.go:130] >     }
	I1208 00:32:05.966186  826329 command_runner.go:130] >   ]
	I1208 00:32:05.966189  826329 command_runner.go:130] > }
	I1208 00:32:05.968541  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.968564  826329 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:32:05.968572  826329 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:32:05.968676  826329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
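The kubelet [Unit]/[Service] drop-in shown above is rendered from the node settings in that config. A small text/template sketch that produces an equivalent unit from the same values (illustrative only; minikube's actual template and flag list live in its bootstrapper package and may differ in detail):

// Sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
    "os"
    "text/template"
)

type kubeletOpts struct {
    Version  string // Kubernetes version, e.g. v1.35.0-beta.0
    NodeName string // --hostname-override value
    NodeIP   string // --node-ip value
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(unit))
    opts := kubeletOpts{
        Version:  "v1.35.0-beta.0",
        NodeName: "functional-525396",
        NodeIP:   "192.168.49.2",
    }
    if err := t.Execute(os.Stdout, opts); err != nil {
        panic(err)
    }
}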
	I1208 00:32:05.968759  826329 ssh_runner.go:195] Run: crio config
	I1208 00:32:06.017314  826329 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 00:32:06.017338  826329 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 00:32:06.017347  826329 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 00:32:06.017350  826329 command_runner.go:130] > #
	I1208 00:32:06.017357  826329 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 00:32:06.017363  826329 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 00:32:06.017370  826329 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 00:32:06.017378  826329 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 00:32:06.017384  826329 command_runner.go:130] > # reload'.
	I1208 00:32:06.017391  826329 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 00:32:06.017404  826329 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 00:32:06.017411  826329 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 00:32:06.017417  826329 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 00:32:06.017423  826329 command_runner.go:130] > [crio]
	I1208 00:32:06.017429  826329 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 00:32:06.017434  826329 command_runner.go:130] > # containers images, in this directory.
	I1208 00:32:06.017704  826329 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 00:32:06.017722  826329 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 00:32:06.017729  826329 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1208 00:32:06.017738  826329 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1208 00:32:06.017898  826329 command_runner.go:130] > # imagestore = ""
	I1208 00:32:06.017914  826329 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 00:32:06.017922  826329 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 00:32:06.018164  826329 command_runner.go:130] > # storage_driver = "overlay"
	I1208 00:32:06.018180  826329 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 00:32:06.018187  826329 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 00:32:06.018278  826329 command_runner.go:130] > # storage_option = [
	I1208 00:32:06.018455  826329 command_runner.go:130] > # ]
	I1208 00:32:06.018487  826329 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 00:32:06.018500  826329 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 00:32:06.018675  826329 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 00:32:06.018694  826329 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 00:32:06.018706  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 00:32:06.018719  826329 command_runner.go:130] > # always happen on a node reboot
	I1208 00:32:06.018990  826329 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 00:32:06.019024  826329 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 00:32:06.019035  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 00:32:06.019041  826329 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 00:32:06.019224  826329 command_runner.go:130] > # version_file_persist = ""
	I1208 00:32:06.019243  826329 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 00:32:06.019258  826329 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 00:32:06.019484  826329 command_runner.go:130] > # internal_wipe = true
	I1208 00:32:06.019500  826329 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1208 00:32:06.019507  826329 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1208 00:32:06.019754  826329 command_runner.go:130] > # internal_repair = true
	I1208 00:32:06.019769  826329 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 00:32:06.019785  826329 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 00:32:06.019793  826329 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 00:32:06.020120  826329 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 00:32:06.020138  826329 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 00:32:06.020143  826329 command_runner.go:130] > [crio.api]
	I1208 00:32:06.020148  826329 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 00:32:06.020346  826329 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 00:32:06.020366  826329 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 00:32:06.020581  826329 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 00:32:06.020605  826329 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 00:32:06.020611  826329 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 00:32:06.020863  826329 command_runner.go:130] > # stream_port = "0"
	I1208 00:32:06.020878  826329 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 00:32:06.021158  826329 command_runner.go:130] > # stream_enable_tls = false
	I1208 00:32:06.021176  826329 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 00:32:06.021352  826329 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 00:32:06.021367  826329 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 00:32:06.021380  826329 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021617  826329 command_runner.go:130] > # stream_tls_cert = ""
	I1208 00:32:06.021634  826329 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 00:32:06.021641  826329 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021794  826329 command_runner.go:130] > # stream_tls_key = ""
	I1208 00:32:06.021808  826329 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 00:32:06.021824  826329 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 00:32:06.021840  826329 command_runner.go:130] > # automatically pick up the changes.
	I1208 00:32:06.022038  826329 command_runner.go:130] > # stream_tls_ca = ""
	I1208 00:32:06.022075  826329 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022282  826329 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 00:32:06.022297  826329 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022560  826329 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 00:32:06.022581  826329 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 00:32:06.022589  826329 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 00:32:06.022596  826329 command_runner.go:130] > [crio.runtime]
	I1208 00:32:06.022603  826329 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 00:32:06.022613  826329 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 00:32:06.022618  826329 command_runner.go:130] > # "nofile=1024:2048"
	I1208 00:32:06.022627  826329 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 00:32:06.022736  826329 command_runner.go:130] > # default_ulimits = [
	I1208 00:32:06.022966  826329 command_runner.go:130] > # ]
	I1208 00:32:06.022982  826329 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 00:32:06.023192  826329 command_runner.go:130] > # no_pivot = false
	I1208 00:32:06.023203  826329 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 00:32:06.023210  826329 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 00:32:06.023435  826329 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 00:32:06.023449  826329 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 00:32:06.023455  826329 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 00:32:06.023463  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023655  826329 command_runner.go:130] > # conmon = ""
	I1208 00:32:06.023668  826329 command_runner.go:130] > # Cgroup setting for conmon
	I1208 00:32:06.023697  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 00:32:06.023812  826329 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 00:32:06.023826  826329 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 00:32:06.023831  826329 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 00:32:06.023839  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023982  826329 command_runner.go:130] > # conmon_env = [
	I1208 00:32:06.024123  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024147  826329 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 00:32:06.024153  826329 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 00:32:06.024161  826329 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 00:32:06.024313  826329 command_runner.go:130] > # default_env = [
	I1208 00:32:06.024407  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024424  826329 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 00:32:06.024439  826329 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1208 00:32:06.024689  826329 command_runner.go:130] > # selinux = false
	I1208 00:32:06.024713  826329 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 00:32:06.024722  826329 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1208 00:32:06.024727  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.024963  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.024977  826329 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1208 00:32:06.024983  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025171  826329 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1208 00:32:06.025185  826329 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 00:32:06.025199  826329 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 00:32:06.025214  826329 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 00:32:06.025222  826329 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 00:32:06.025227  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025459  826329 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 00:32:06.025474  826329 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 00:32:06.025479  826329 command_runner.go:130] > # the cgroup blockio controller.
	I1208 00:32:06.025701  826329 command_runner.go:130] > # blockio_config_file = ""
	I1208 00:32:06.025716  826329 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1208 00:32:06.025721  826329 command_runner.go:130] > # blockio parameters.
	I1208 00:32:06.025998  826329 command_runner.go:130] > # blockio_reload = false
	I1208 00:32:06.026018  826329 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 00:32:06.026025  826329 command_runner.go:130] > # irqbalance daemon.
	I1208 00:32:06.026221  826329 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 00:32:06.026241  826329 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1208 00:32:06.026249  826329 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1208 00:32:06.026257  826329 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1208 00:32:06.026494  826329 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1208 00:32:06.026510  826329 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 00:32:06.026517  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.026722  826329 command_runner.go:130] > # rdt_config_file = ""
	I1208 00:32:06.026753  826329 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 00:32:06.026902  826329 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 00:32:06.026919  826329 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 00:32:06.027125  826329 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 00:32:06.027138  826329 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 00:32:06.027163  826329 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 00:32:06.027177  826329 command_runner.go:130] > # will be added.
	I1208 00:32:06.027277  826329 command_runner.go:130] > # default_capabilities = [
	I1208 00:32:06.027581  826329 command_runner.go:130] > # 	"CHOWN",
	I1208 00:32:06.027682  826329 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 00:32:06.027912  826329 command_runner.go:130] > # 	"FSETID",
	I1208 00:32:06.028073  826329 command_runner.go:130] > # 	"FOWNER",
	I1208 00:32:06.028166  826329 command_runner.go:130] > # 	"SETGID",
	I1208 00:32:06.028351  826329 command_runner.go:130] > # 	"SETUID",
	I1208 00:32:06.028526  826329 command_runner.go:130] > # 	"SETPCAP",
	I1208 00:32:06.028680  826329 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 00:32:06.028802  826329 command_runner.go:130] > # 	"KILL",
	I1208 00:32:06.028996  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029019  826329 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 00:32:06.029028  826329 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 00:32:06.029301  826329 command_runner.go:130] > # add_inheritable_capabilities = false
	I1208 00:32:06.029326  826329 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 00:32:06.029333  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029338  826329 command_runner.go:130] > default_sysctls = [
	I1208 00:32:06.029464  826329 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1208 00:32:06.029477  826329 command_runner.go:130] > ]
	I1208 00:32:06.029483  826329 command_runner.go:130] > # List of devices on the host that a
	I1208 00:32:06.029491  826329 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 00:32:06.029495  826329 command_runner.go:130] > # allowed_devices = [
	I1208 00:32:06.029499  826329 command_runner.go:130] > # 	"/dev/fuse",
	I1208 00:32:06.029507  826329 command_runner.go:130] > # 	"/dev/net/tun",
	I1208 00:32:06.029726  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029756  826329 command_runner.go:130] > # List of additional devices. specified as
	I1208 00:32:06.029769  826329 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 00:32:06.029775  826329 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 00:32:06.029782  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029898  826329 command_runner.go:130] > # additional_devices = [
	I1208 00:32:06.029911  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029918  826329 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 00:32:06.029922  826329 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 00:32:06.030014  826329 command_runner.go:130] > # 	"/etc/cdi",
	I1208 00:32:06.030033  826329 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 00:32:06.030037  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030045  826329 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 00:32:06.030051  826329 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 00:32:06.030058  826329 command_runner.go:130] > # Defaults to false.
	I1208 00:32:06.030179  826329 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 00:32:06.030194  826329 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 00:32:06.030201  826329 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 00:32:06.030206  826329 command_runner.go:130] > # hooks_dir = [
	I1208 00:32:06.030462  826329 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 00:32:06.030539  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030554  826329 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 00:32:06.030561  826329 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 00:32:06.030592  826329 command_runner.go:130] > # its default mounts from the following two files:
	I1208 00:32:06.030598  826329 command_runner.go:130] > #
	I1208 00:32:06.030608  826329 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 00:32:06.030631  826329 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 00:32:06.030642  826329 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 00:32:06.030646  826329 command_runner.go:130] > #
	I1208 00:32:06.030658  826329 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 00:32:06.030668  826329 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 00:32:06.030675  826329 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 00:32:06.030680  826329 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 00:32:06.030684  826329 command_runner.go:130] > #
	I1208 00:32:06.030688  826329 command_runner.go:130] > # default_mounts_file = ""
	I1208 00:32:06.030697  826329 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 00:32:06.030710  826329 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 00:32:06.030795  826329 command_runner.go:130] > # pids_limit = -1
	I1208 00:32:06.030811  826329 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1208 00:32:06.030858  826329 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 00:32:06.030867  826329 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 00:32:06.030881  826329 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 00:32:06.030886  826329 command_runner.go:130] > # log_size_max = -1
	I1208 00:32:06.030903  826329 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 00:32:06.031086  826329 command_runner.go:130] > # log_to_journald = false
	I1208 00:32:06.031102  826329 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 00:32:06.031167  826329 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 00:32:06.031181  826329 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 00:32:06.031241  826329 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 00:32:06.031258  826329 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 00:32:06.031327  826329 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 00:32:06.031335  826329 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 00:32:06.031339  826329 command_runner.go:130] > # read_only = false
	I1208 00:32:06.031345  826329 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 00:32:06.031377  826329 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 00:32:06.031383  826329 command_runner.go:130] > # live configuration reload.
	I1208 00:32:06.031388  826329 command_runner.go:130] > # log_level = "info"
	I1208 00:32:06.031397  826329 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 00:32:06.031408  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.031412  826329 command_runner.go:130] > # log_filter = ""
	I1208 00:32:06.031419  826329 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031430  826329 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 00:32:06.031434  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031452  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031456  826329 command_runner.go:130] > # uid_mappings = ""
	I1208 00:32:06.031462  826329 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031468  826329 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 00:32:06.031472  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031482  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031553  826329 command_runner.go:130] > # gid_mappings = ""
	I1208 00:32:06.031569  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 00:32:06.031632  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031648  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031656  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031742  826329 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 00:32:06.031759  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 00:32:06.031785  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031798  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031807  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.032017  826329 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 00:32:06.032056  826329 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 00:32:06.032071  826329 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 00:32:06.032077  826329 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1208 00:32:06.032099  826329 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 00:32:06.032106  826329 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 00:32:06.032112  826329 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 00:32:06.032205  826329 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 00:32:06.032267  826329 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 00:32:06.032278  826329 command_runner.go:130] > # drop_infra_ctr = true
	I1208 00:32:06.032285  826329 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 00:32:06.032292  826329 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 00:32:06.032307  826329 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 00:32:06.032340  826329 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 00:32:06.032356  826329 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1208 00:32:06.032371  826329 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1208 00:32:06.032378  826329 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1208 00:32:06.032384  826329 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1208 00:32:06.032394  826329 command_runner.go:130] > # shared_cpuset = ""
	I1208 00:32:06.032400  826329 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 00:32:06.032411  826329 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 00:32:06.032448  826329 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 00:32:06.032463  826329 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 00:32:06.032467  826329 command_runner.go:130] > # pinns_path = ""
	I1208 00:32:06.032473  826329 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1208 00:32:06.032479  826329 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1208 00:32:06.032487  826329 command_runner.go:130] > # enable_criu_support = true
	I1208 00:32:06.032493  826329 command_runner.go:130] > # Enable/disable the generation of the container,
	I1208 00:32:06.032500  826329 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1208 00:32:06.032732  826329 command_runner.go:130] > # enable_pod_events = false
	I1208 00:32:06.032748  826329 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 00:32:06.032827  826329 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1208 00:32:06.032846  826329 command_runner.go:130] > # default_runtime = "crun"
	I1208 00:32:06.032871  826329 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 00:32:06.032889  826329 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1208 00:32:06.032901  826329 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 00:32:06.032911  826329 command_runner.go:130] > # creation as a file is not desired either.
	I1208 00:32:06.032919  826329 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 00:32:06.032929  826329 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 00:32:06.032938  826329 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 00:32:06.032974  826329 command_runner.go:130] > # ]
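As a hedged illustration of the option described above, using the /etc/hostname case the comment itself mentions (the list is an example, not the value used in this run):

    [crio.runtime]
    # Fail container creation instead of silently creating /etc/hostname as a directory.
    absent_mount_sources_to_reject = [
        "/etc/hostname",
    ]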
	I1208 00:32:06.033041  826329 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 00:32:06.033057  826329 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 00:32:06.033064  826329 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1208 00:32:06.033070  826329 command_runner.go:130] > # Each entry in the table should follow the format:
	I1208 00:32:06.033073  826329 command_runner.go:130] > #
	I1208 00:32:06.033106  826329 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1208 00:32:06.033112  826329 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1208 00:32:06.033117  826329 command_runner.go:130] > # runtime_type = "oci"
	I1208 00:32:06.033192  826329 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1208 00:32:06.033209  826329 command_runner.go:130] > # inherit_default_runtime = false
	I1208 00:32:06.033214  826329 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1208 00:32:06.033219  826329 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1208 00:32:06.033225  826329 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1208 00:32:06.033228  826329 command_runner.go:130] > # monitor_env = []
	I1208 00:32:06.033233  826329 command_runner.go:130] > # privileged_without_host_devices = false
	I1208 00:32:06.033237  826329 command_runner.go:130] > # allowed_annotations = []
	I1208 00:32:06.033263  826329 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1208 00:32:06.033276  826329 command_runner.go:130] > # no_sync_log = false
	I1208 00:32:06.033282  826329 command_runner.go:130] > # default_annotations = {}
	I1208 00:32:06.033376  826329 command_runner.go:130] > # stream_websockets = false
	I1208 00:32:06.033384  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.033433  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.033444  826329 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1208 00:32:06.033456  826329 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1208 00:32:06.033467  826329 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 00:32:06.033474  826329 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 00:32:06.033477  826329 command_runner.go:130] > #   in $PATH.
	I1208 00:32:06.033483  826329 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1208 00:32:06.033489  826329 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 00:32:06.033495  826329 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1208 00:32:06.033504  826329 command_runner.go:130] > #   state.
	I1208 00:32:06.033518  826329 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 00:32:06.033528  826329 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 00:32:06.033535  826329 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1208 00:32:06.033547  826329 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1208 00:32:06.033552  826329 command_runner.go:130] > #   the values from the default runtime on load time.
	I1208 00:32:06.033558  826329 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 00:32:06.033563  826329 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 00:32:06.033604  826329 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 00:32:06.033610  826329 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 00:32:06.033615  826329 command_runner.go:130] > #   The currently recognized values are:
	I1208 00:32:06.033697  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 00:32:06.033736  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 00:32:06.033745  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 00:32:06.033760  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 00:32:06.033770  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 00:32:06.033787  826329 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 00:32:06.033799  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1208 00:32:06.033811  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1208 00:32:06.033818  826329 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 00:32:06.033824  826329 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1208 00:32:06.033832  826329 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1208 00:32:06.033842  826329 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1208 00:32:06.033851  826329 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1208 00:32:06.033863  826329 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1208 00:32:06.033869  826329 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1208 00:32:06.033883  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1208 00:32:06.033892  826329 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1208 00:32:06.033896  826329 command_runner.go:130] > #   deprecated option "conmon".
	I1208 00:32:06.033903  826329 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1208 00:32:06.033908  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1208 00:32:06.033916  826329 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1208 00:32:06.033925  826329 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 00:32:06.033933  826329 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1208 00:32:06.033944  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1208 00:32:06.033955  826329 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1208 00:32:06.033959  826329 command_runner.go:130] > #   conmon-rs by using:
	I1208 00:32:06.033976  826329 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1208 00:32:06.033990  826329 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1208 00:32:06.033998  826329 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1208 00:32:06.034005  826329 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1208 00:32:06.034012  826329 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1208 00:32:06.034036  826329 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1208 00:32:06.034044  826329 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1208 00:32:06.034064  826329 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1208 00:32:06.034074  826329 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1208 00:32:06.034087  826329 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1208 00:32:06.034557  826329 command_runner.go:130] > #   when a machine crash happens.
	I1208 00:32:06.034567  826329 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1208 00:32:06.034582  826329 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1208 00:32:06.034589  826329 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1208 00:32:06.034594  826329 command_runner.go:130] > #   seccomp profile for the runtime.
	I1208 00:32:06.034680  826329 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1208 00:32:06.034713  826329 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1208 00:32:06.034720  826329 command_runner.go:130] > #
	I1208 00:32:06.034732  826329 command_runner.go:130] > # Using the seccomp notifier feature:
	I1208 00:32:06.034735  826329 command_runner.go:130] > #
	I1208 00:32:06.034742  826329 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1208 00:32:06.034749  826329 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1208 00:32:06.034762  826329 command_runner.go:130] > #
	I1208 00:32:06.034769  826329 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1208 00:32:06.034785  826329 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1208 00:32:06.034788  826329 command_runner.go:130] > #
	I1208 00:32:06.034795  826329 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1208 00:32:06.034799  826329 command_runner.go:130] > # feature.
	I1208 00:32:06.034802  826329 command_runner.go:130] > #
	I1208 00:32:06.034808  826329 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1208 00:32:06.034819  826329 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1208 00:32:06.034825  826329 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1208 00:32:06.034837  826329 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1208 00:32:06.034858  826329 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1208 00:32:06.034861  826329 command_runner.go:130] > #
	I1208 00:32:06.034867  826329 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1208 00:32:06.034878  826329 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1208 00:32:06.034881  826329 command_runner.go:130] > #
	I1208 00:32:06.034887  826329 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1208 00:32:06.034897  826329 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1208 00:32:06.034900  826329 command_runner.go:130] > #
	I1208 00:32:06.034906  826329 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1208 00:32:06.034916  826329 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1208 00:32:06.034920  826329 command_runner.go:130] > # limitation.
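A sketch of what a custom runtime-handler entry using the fields documented above could look like; the handler name "crun-debug" and its values are hypothetical and only illustrate the format, including how the seccomp notifier annotation would be allowed for that handler:

    [crio.runtime.runtimes.crun-debug]
    runtime_path = "/usr/libexec/crio/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun-debug"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
        "io.containers.trace-syscall",
    ]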
	I1208 00:32:06.034927  826329 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1208 00:32:06.034932  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1208 00:32:06.034939  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.034944  826329 command_runner.go:130] > runtime_root = "/run/crun"
	I1208 00:32:06.034954  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.034958  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.034962  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.034972  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.034976  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.034981  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.034990  826329 command_runner.go:130] > allowed_annotations = [
	I1208 00:32:06.034999  826329 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1208 00:32:06.035002  826329 command_runner.go:130] > ]
	I1208 00:32:06.035007  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035011  826329 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 00:32:06.035016  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1208 00:32:06.035020  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.035024  826329 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 00:32:06.035034  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.035038  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.035042  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.035046  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.035050  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.035054  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.035145  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035184  826329 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 00:32:06.035191  826329 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 00:32:06.035197  826329 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 00:32:06.035205  826329 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1208 00:32:06.035222  826329 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1208 00:32:06.035233  826329 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1208 00:32:06.035249  826329 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1208 00:32:06.035255  826329 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 00:32:06.035265  826329 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 00:32:06.035274  826329 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 00:32:06.035280  826329 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 00:32:06.035291  826329 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 00:32:06.035294  826329 command_runner.go:130] > # Example:
	I1208 00:32:06.035299  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 00:32:06.035309  826329 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 00:32:06.035318  826329 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 00:32:06.035324  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 00:32:06.035413  826329 command_runner.go:130] > # cpuset = "0-1"
	I1208 00:32:06.035447  826329 command_runner.go:130] > # cpushares = "5"
	I1208 00:32:06.035460  826329 command_runner.go:130] > # cpuquota = "1000"
	I1208 00:32:06.035471  826329 command_runner.go:130] > # cpuperiod = "100000"
	I1208 00:32:06.035475  826329 command_runner.go:130] > # cpulimit = "35"
	I1208 00:32:06.035479  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.035483  826329 command_runner.go:130] > # The workload name is workload-type.
	I1208 00:32:06.035497  826329 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 00:32:06.035502  826329 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 00:32:06.035540  826329 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 00:32:06.035556  826329 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 00:32:06.035563  826329 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
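To make the cpulimit/cpuquota relationship above concrete: with the default cpuperiod of 100000 microseconds, a cpulimit of 500 millicores corresponds to a cpuquota of 50000 (0.5 CPU x 100000). The following hypothetical workload entry illustrates this; the names mirror the example above.

    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.workload-type.resources]
    cpuperiod = "100000"
    # 500 millicores; overrides any cpuquota and yields an effective quota of 50000.
    cpulimit = "500"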
	I1208 00:32:06.035576  826329 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1208 00:32:06.035584  826329 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1208 00:32:06.035592  826329 command_runner.go:130] > # Default value is set to true
	I1208 00:32:06.035597  826329 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1208 00:32:06.035603  826329 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1208 00:32:06.035607  826329 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1208 00:32:06.035703  826329 command_runner.go:130] > # Default value is set to 'false'
	I1208 00:32:06.035729  826329 command_runner.go:130] > # disable_hostport_mapping = false
	I1208 00:32:06.035736  826329 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1208 00:32:06.035751  826329 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1208 00:32:06.035755  826329 command_runner.go:130] > # timezone = ""
	I1208 00:32:06.035762  826329 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 00:32:06.035769  826329 command_runner.go:130] > #
	I1208 00:32:06.035775  826329 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 00:32:06.035782  826329 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1208 00:32:06.035785  826329 command_runner.go:130] > [crio.image]
	I1208 00:32:06.035791  826329 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 00:32:06.035796  826329 command_runner.go:130] > # default_transport = "docker://"
	I1208 00:32:06.035802  826329 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 00:32:06.035813  826329 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035818  826329 command_runner.go:130] > # global_auth_file = ""
	I1208 00:32:06.035823  826329 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 00:32:06.035833  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035852  826329 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1208 00:32:06.035863  826329 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 00:32:06.035874  826329 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035950  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035964  826329 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 00:32:06.035972  826329 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 00:32:06.035989  826329 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1208 00:32:06.035998  826329 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1208 00:32:06.036009  826329 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 00:32:06.036013  826329 command_runner.go:130] > # pause_command = "/pause"
	I1208 00:32:06.036019  826329 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1208 00:32:06.036030  826329 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1208 00:32:06.036036  826329 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1208 00:32:06.036043  826329 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1208 00:32:06.036052  826329 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1208 00:32:06.036058  826329 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1208 00:32:06.036062  826329 command_runner.go:130] > # pinned_images = [
	I1208 00:32:06.036065  826329 command_runner.go:130] > # ]
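A hedged sketch of the [crio.image] options just described, with illustrative values (the image names are examples, not what this run used); the pinned_images entries show the exact, glob and keyword patterns as the comment describes them:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    # Empty string: fall back to the entrypoint/command from the pause image itself.
    pause_command = ""
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",   # exact match
        "registry.k8s.io/kube-*",          # glob: wildcard at the end
        "*coredns*",                       # keyword: wildcards on both ends
    ]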
	I1208 00:32:06.036071  826329 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 00:32:06.036077  826329 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 00:32:06.036087  826329 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 00:32:06.036093  826329 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 00:32:06.036104  826329 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 00:32:06.036109  826329 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1208 00:32:06.036115  826329 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1208 00:32:06.036126  826329 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1208 00:32:06.036133  826329 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1208 00:32:06.036139  826329 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1208 00:32:06.036145  826329 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1208 00:32:06.036150  826329 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1208 00:32:06.036160  826329 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 00:32:06.036167  826329 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 00:32:06.036172  826329 command_runner.go:130] > # changing them here.
	I1208 00:32:06.036184  826329 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1208 00:32:06.036193  826329 command_runner.go:130] > # insecure_registries = [
	I1208 00:32:06.036196  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036300  826329 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 00:32:06.036317  826329 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1208 00:32:06.036326  826329 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 00:32:06.036331  826329 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 00:32:06.036335  826329 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 00:32:06.036342  826329 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1208 00:32:06.036353  826329 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1208 00:32:06.036358  826329 command_runner.go:130] > # auto_reload_registries = false
	I1208 00:32:06.036365  826329 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1208 00:32:06.036377  826329 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1208 00:32:06.036388  826329 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1208 00:32:06.036393  826329 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1208 00:32:06.036398  826329 command_runner.go:130] > # The mode of short name resolution.
	I1208 00:32:06.036404  826329 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1208 00:32:06.036418  826329 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1208 00:32:06.036424  826329 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1208 00:32:06.036433  826329 command_runner.go:130] > # short_name_mode = "enforcing"
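Illustrating the two pull-related options above with hypothetical values: a 10-minute pull timeout implies progress output roughly every minute (pull_progress_timeout / 10), and "enforcing" keeps ambiguous short-name pulls failing.

    [crio.image]
    # Progress is reported about every 1m (timeout / 10); "0s" disables both.
    pull_progress_timeout = "10m"
    short_name_mode = "enforcing"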
	I1208 00:32:06.036439  826329 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1208 00:32:06.036446  826329 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1208 00:32:06.036457  826329 command_runner.go:130] > # oci_artifact_mount_support = true
	I1208 00:32:06.036463  826329 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 00:32:06.036466  826329 command_runner.go:130] > # CNI plugins.
	I1208 00:32:06.036469  826329 command_runner.go:130] > [crio.network]
	I1208 00:32:06.036476  826329 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 00:32:06.036481  826329 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1208 00:32:06.036485  826329 command_runner.go:130] > # cni_default_network = ""
	I1208 00:32:06.036496  826329 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 00:32:06.036501  826329 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 00:32:06.036506  826329 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 00:32:06.036515  826329 command_runner.go:130] > # plugin_dirs = [
	I1208 00:32:06.036642  826329 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 00:32:06.036668  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036675  826329 command_runner.go:130] > # List of included pod metrics.
	I1208 00:32:06.036679  826329 command_runner.go:130] > # included_pod_metrics = [
	I1208 00:32:06.036860  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036921  826329 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 00:32:06.036927  826329 command_runner.go:130] > [crio.metrics]
	I1208 00:32:06.036932  826329 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 00:32:06.036937  826329 command_runner.go:130] > # enable_metrics = false
	I1208 00:32:06.036942  826329 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 00:32:06.036953  826329 command_runner.go:130] > # Per default all metrics are enabled.
	I1208 00:32:06.036960  826329 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1208 00:32:06.036994  826329 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 00:32:06.037043  826329 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 00:32:06.037079  826329 command_runner.go:130] > # metrics_collectors = [
	I1208 00:32:06.037090  826329 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 00:32:06.037155  826329 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1208 00:32:06.037178  826329 command_runner.go:130] > # 	"containers_oom_total",
	I1208 00:32:06.037336  826329 command_runner.go:130] > # 	"processes_defunct",
	I1208 00:32:06.037413  826329 command_runner.go:130] > # 	"operations_total",
	I1208 00:32:06.037662  826329 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 00:32:06.037734  826329 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 00:32:06.037748  826329 command_runner.go:130] > # 	"operations_errors_total",
	I1208 00:32:06.037753  826329 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 00:32:06.037772  826329 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 00:32:06.037792  826329 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 00:32:06.037922  826329 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 00:32:06.037987  826329 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 00:32:06.038011  826329 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 00:32:06.038021  826329 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1208 00:32:06.038045  826329 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1208 00:32:06.038193  826329 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1208 00:32:06.038255  826329 command_runner.go:130] > # ]
	I1208 00:32:06.038268  826329 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1208 00:32:06.038283  826329 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1208 00:32:06.038321  826329 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 00:32:06.038335  826329 command_runner.go:130] > # metrics_port = 9090
	I1208 00:32:06.038341  826329 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 00:32:06.038408  826329 command_runner.go:130] > # metrics_socket = ""
	I1208 00:32:06.038423  826329 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 00:32:06.038430  826329 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 00:32:06.038449  826329 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 00:32:06.038461  826329 command_runner.go:130] > # certificate on any modification event.
	I1208 00:32:06.038588  826329 command_runner.go:130] > # metrics_cert = ""
	I1208 00:32:06.038614  826329 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 00:32:06.038622  826329 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 00:32:06.038740  826329 command_runner.go:130] > # metrics_key = ""
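A minimal sketch of enabling the metrics endpoint described above with a reduced collector set; the collector names are taken from the default list printed above and the port matches the commented default. This is illustrative, not the configuration used in this run.

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    # Unprefixed names are equivalent to their "crio_"/"container_runtime_" forms.
    metrics_collectors = [
        "operations_total",
        "image_pulls_failure_total",
        "containers_oom_total",
    ]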
	I1208 00:32:06.038809  826329 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 00:32:06.038823  826329 command_runner.go:130] > [crio.tracing]
	I1208 00:32:06.038829  826329 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 00:32:06.038833  826329 command_runner.go:130] > # enable_tracing = false
	I1208 00:32:06.038876  826329 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1208 00:32:06.038890  826329 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1208 00:32:06.038899  826329 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1208 00:32:06.038973  826329 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
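For the sampling option above: the rate is expressed per million spans, so 100000 samples roughly 10% of spans and 1000000 samples everything. A hypothetical snippet:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    # Sample ~10% of spans (100000 out of every 1000000).
    tracing_sampling_rate_per_million = 100000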
	I1208 00:32:06.038987  826329 command_runner.go:130] > # CRI-O NRI configuration.
	I1208 00:32:06.038992  826329 command_runner.go:130] > [crio.nri]
	I1208 00:32:06.039013  826329 command_runner.go:130] > # Globally enable or disable NRI.
	I1208 00:32:06.039024  826329 command_runner.go:130] > # enable_nri = true
	I1208 00:32:06.039029  826329 command_runner.go:130] > # NRI socket to listen on.
	I1208 00:32:06.039033  826329 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1208 00:32:06.039044  826329 command_runner.go:130] > # NRI plugin directory to use.
	I1208 00:32:06.039198  826329 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1208 00:32:06.039225  826329 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1208 00:32:06.039233  826329 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1208 00:32:06.039239  826329 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1208 00:32:06.039363  826329 command_runner.go:130] > # nri_disable_connections = false
	I1208 00:32:06.039381  826329 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1208 00:32:06.039476  826329 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1208 00:32:06.039494  826329 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1208 00:32:06.039499  826329 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1208 00:32:06.039504  826329 command_runner.go:130] > # NRI default validator configuration.
	I1208 00:32:06.039511  826329 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1208 00:32:06.039518  826329 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1208 00:32:06.039557  826329 command_runner.go:130] > # can be restricted/rejected:
	I1208 00:32:06.039568  826329 command_runner.go:130] > # - OCI hook injection
	I1208 00:32:06.039573  826329 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1208 00:32:06.039586  826329 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1208 00:32:06.039595  826329 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1208 00:32:06.039600  826329 command_runner.go:130] > # - adjustment of linux namespaces
	I1208 00:32:06.039606  826329 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1208 00:32:06.039685  826329 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1208 00:32:06.039812  826329 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1208 00:32:06.039825  826329 command_runner.go:130] > #
	I1208 00:32:06.039830  826329 command_runner.go:130] > # [crio.nri.default_validator]
	I1208 00:32:06.039911  826329 command_runner.go:130] > # nri_enable_default_validator = false
	I1208 00:32:06.039939  826329 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1208 00:32:06.039947  826329 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1208 00:32:06.039959  826329 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1208 00:32:06.039966  826329 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1208 00:32:06.039971  826329 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1208 00:32:06.039975  826329 command_runner.go:130] > # nri_validator_required_plugins = [
	I1208 00:32:06.039978  826329 command_runner.go:130] > # ]
	I1208 00:32:06.039984  826329 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
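A hedged sketch of the [crio.nri.default_validator] block documented above, enabling the validator, rejecting OCI hook injection, and requiring one plugin ("my-resource-plugin" is an invented name for illustration):

    [crio.nri]
    enable_nri = true
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    # Reject container creation unless this (hypothetical) plugin handled the request.
    nri_validator_required_plugins = [
        "my-resource-plugin",
    ]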
	I1208 00:32:06.039994  826329 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 00:32:06.040003  826329 command_runner.go:130] > [crio.stats]
	I1208 00:32:06.040013  826329 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 00:32:06.040019  826329 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 00:32:06.040027  826329 command_runner.go:130] > # stats_collection_period = 0
	I1208 00:32:06.040033  826329 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1208 00:32:06.040043  826329 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1208 00:32:06.040047  826329 command_runner.go:130] > # collection_period = 0
	I1208 00:32:06.041802  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994368044Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1208 00:32:06.041819  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994407331Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1208 00:32:06.041829  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994434752Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1208 00:32:06.041836  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994457826Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1208 00:32:06.041847  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994536038Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:06.041867  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994955873Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1208 00:32:06.041895  826329 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 00:32:06.042057  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:06.042089  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:06.042117  826329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:32:06.042147  826329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:32:06.042284  826329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:32:06.042367  826329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:32:06.049993  826329 command_runner.go:130] > kubeadm
	I1208 00:32:06.050024  826329 command_runner.go:130] > kubectl
	I1208 00:32:06.050029  826329 command_runner.go:130] > kubelet
	I1208 00:32:06.051018  826329 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:32:06.051091  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:32:06.059413  826329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:32:06.073688  826329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:32:06.087599  826329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 00:32:06.100920  826329 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:32:06.104607  826329 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:32:06.104862  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:06.223310  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:06.506702  826329 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:32:06.506774  826329 certs.go:195] generating shared ca certs ...
	I1208 00:32:06.506805  826329 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:06.507033  826329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:32:06.507124  826329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:32:06.507152  826329 certs.go:257] generating profile certs ...
	I1208 00:32:06.507310  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:32:06.507422  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:32:06.507510  826329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:32:06.507537  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:32:06.507566  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:32:06.507605  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:32:06.507636  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:32:06.507680  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:32:06.507713  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:32:06.507755  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:32:06.507788  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:32:06.507873  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:32:06.507940  826329 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:32:06.507964  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:32:06.508024  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:32:06.508086  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:32:06.508156  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:32:06.508255  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:06.508336  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.508374  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.508417  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.509152  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:32:06.534629  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:32:06.554458  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:32:06.573968  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:32:06.590997  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:32:06.608508  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:32:06.625424  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:32:06.642336  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:32:06.660002  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:32:06.677652  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:32:06.695647  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:32:06.713354  826329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:32:06.725836  826329 ssh_runner.go:195] Run: openssl version
	I1208 00:32:06.731951  826329 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:32:06.732096  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.739312  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:32:06.746650  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750259  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750312  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750360  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.790520  826329 command_runner.go:130] > 51391683
	I1208 00:32:06.791045  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:32:06.798345  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.805645  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:32:06.813042  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816781  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816807  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816859  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.857524  826329 command_runner.go:130] > 3ec20f2e
	I1208 00:32:06.857994  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:32:06.865262  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.872409  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:32:06.879529  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883021  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883115  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883198  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.923843  826329 command_runner.go:130] > b5213941
	I1208 00:32:06.924322  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:32:06.931656  826329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935287  826329 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935325  826329 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:32:06.935332  826329 command_runner.go:130] > Device: 259,1	Inode: 1322385     Links: 1
	I1208 00:32:06.935354  826329 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:06.935369  826329 command_runner.go:130] > Access: 2025-12-08 00:27:59.408752113 +0000
	I1208 00:32:06.935374  826329 command_runner.go:130] > Modify: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935396  826329 command_runner.go:130] > Change: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935407  826329 command_runner.go:130] >  Birth: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935530  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:32:06.975831  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:06.976261  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:32:07.017790  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.017978  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:32:07.058488  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.058966  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:32:07.099457  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.099917  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:32:07.141471  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.141903  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:32:07.182188  826329 command_runner.go:130] > Certificate will not expire
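Each of the -checkend 86400 calls above asks OpenSSL whether a control-plane certificate expires within the next 24 hours; "Certificate will not expire" means the check passed. The same test can be expressed with the Go standard library alone; this is a sketch of the idea, not minikube's implementation, and the path in main is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the equivalent of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}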
	I1208 00:32:07.182659  826329 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:07.182760  826329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:32:07.182825  826329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:32:07.209144  826329 cri.go:89] found id: ""
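StartCluster first asks CRI-O (via crictl) for any existing kube-system containers; the empty `found id: ""` result tells it nothing is running yet. A small Go wrapper around the same crictl invocation, shown only as a sketch of the query the log performs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers (in any
// state, -a) whose pod namespace label is kube-system, matching the
// crictl command in the log. An empty slice corresponds to `found id: ""`.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}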
	I1208 00:32:07.209214  826329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:32:07.216134  826329 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:32:07.216154  826329 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:32:07.216162  826329 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:32:07.217097  826329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:32:07.217114  826329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:32:07.217178  826329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:32:07.224428  826329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:32:07.224856  826329 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.224961  826329 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "functional-525396" cluster setting kubeconfig missing "functional-525396" context setting]
	I1208 00:32:07.225241  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.225667  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.225818  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.226341  826329 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:32:07.226363  826329 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:32:07.226369  826329 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:32:07.226375  826329 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:32:07.226381  826329 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:32:07.226674  826329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:32:07.226772  826329 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:32:07.234310  826329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:32:07.234378  826329 kubeadm.go:602] duration metric: took 17.25872ms to restartPrimaryControlPlane
	I1208 00:32:07.234395  826329 kubeadm.go:403] duration metric: took 51.743543ms to StartCluster
	I1208 00:32:07.234412  826329 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.234484  826329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.235129  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
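kubeconfig.go above found that the test's kubeconfig was missing both the cluster and the context entry for "functional-525396" and repaired it. The check itself is simple; here is a sketch using k8s.io/client-go's clientcmd loader (an assumption made for illustration, not the code minikube actually uses):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair reports whether a kubeconfig is missing either the
// cluster or the context entry for the given profile name, the
// condition kubeconfig.go logs as "needs updating (will repair)".
func needsRepair(kubeconfigPath, profile string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	_, hasCluster := cfg.Clusters[profile]
	_, hasContext := cfg.Contexts[profile]
	return !hasCluster || !hasContext, nil
}

func main() {
	repair, err := needsRepair(os.Getenv("KUBECONFIG"), "functional-525396")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("needs repair:", repair)
}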
	I1208 00:32:07.235358  826329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:32:07.235583  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:07.235658  826329 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:32:07.235740  826329 addons.go:70] Setting storage-provisioner=true in profile "functional-525396"
	I1208 00:32:07.235754  826329 addons.go:239] Setting addon storage-provisioner=true in "functional-525396"
	I1208 00:32:07.235778  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.236237  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.236576  826329 addons.go:70] Setting default-storageclass=true in profile "functional-525396"
	I1208 00:32:07.236601  826329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-525396"
	I1208 00:32:07.236875  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.242309  826329 out.go:179] * Verifying Kubernetes components...
	I1208 00:32:07.245184  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:07.271460  826329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:32:07.274400  826329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.274424  826329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:32:07.274492  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.276071  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.276241  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.276512  826329 addons.go:239] Setting addon default-storageclass=true in "functional-525396"
	I1208 00:32:07.276540  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.276944  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.314823  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.318477  826329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:07.318497  826329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:32:07.318558  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.352646  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.447557  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:07.488721  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.519084  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.257520  826329 node_ready.go:35] waiting up to 6m0s for node "functional-525396" to be "Ready" ...
	I1208 00:32:08.257618  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257654  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257688  826329 retry.go:31] will retry after 154.925821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257654  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.257704  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257722  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257734  826329 retry.go:31] will retry after 240.899479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
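The storage-provisioner and storageclass manifests cannot be applied yet because kube-apiserver is not listening on port 8441, so every kubectl apply fails with "connection refused" and retry.go schedules another attempt after a growing, jittered delay. Below is a generic sketch of that pattern; the paths and attempt count are illustrative, and this is not minikube's retry package.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and, like the
// retry.go entries in the log, sleeps a growing, jittered delay between
// attempts while the apiserver is still coming up.
func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%v: %s", err, out)
		// Jittered exponential backoff before the next attempt.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay *= 2
	}
	return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, lastErr)
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig", "/etc/kubernetes/addons/storage-provisioner.yaml", 5)
	if err != nil {
		fmt.Println(err)
	}
}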
	I1208 00:32:08.257750  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.258076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.413579  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.477856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.477934  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.477962  826329 retry.go:31] will retry after 471.79599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.499019  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.559244  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.559341  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.559365  826329 retry.go:31] will retry after 419.613997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.758693  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.758772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.759084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.950598  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.979140  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.022887  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.022933  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.022979  826329 retry.go:31] will retry after 789.955074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083550  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.083656  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083684  826329 retry.go:31] will retry after 584.522236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.668477  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.723720  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.727856  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.727932  826329 retry.go:31] will retry after 996.136704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.757987  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.758082  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.813684  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:09.865943  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.869391  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.869422  826329 retry.go:31] will retry after 1.082403251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.257910  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:10.258329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
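From this point the log alternates between addon retries and node_ready.go polling GET /api/v1/nodes/functional-525396 roughly every 500ms, each attempt failing with "connection refused" until the apiserver comes back. A reachability-only sketch of such a poll follows; the real check also reads the node's Ready condition, and TLS verification is skipped here purely because this illustrative snippet targets a self-signed test CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls url until the apiserver accepts connections or
// the timeout elapses. It checks only reachability; inspecting the
// node's Ready condition, as node_ready.go does, is omitted.
func waitForAPIServer(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", url, timeout)
}

func main() {
	err := waitForAPIServer("https://192.168.49.2:8441/api/v1/nodes/functional-525396", 500*time.Millisecond, 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}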
	I1208 00:32:10.724942  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:10.758490  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.758896  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.786956  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:10.787023  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.787045  826329 retry.go:31] will retry after 1.653307887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.952461  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:11.017630  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:11.017682  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.017706  826329 retry.go:31] will retry after 1.450018323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.257721  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.258081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.757826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.757911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.258016  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.258092  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.258398  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:12.258449  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.440941  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:12.468519  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:12.523147  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.523192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.523212  826329 retry.go:31] will retry after 1.808868247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537050  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.537096  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537115  826329 retry.go:31] will retry after 1.005297336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.758616  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.758689  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.758985  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.257733  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.542714  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:13.607721  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:13.607772  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.607793  826329 retry.go:31] will retry after 2.59048957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.758025  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.758103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.257759  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.257837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.332402  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:14.393856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:14.393908  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.393927  826329 retry.go:31] will retry after 3.003957784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.758447  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.758779  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:14.758833  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:15.258432  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.258504  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.258873  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.758697  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.758770  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.198619  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:16.257994  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.258110  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.258333  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.261663  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:16.261706  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.261724  826329 retry.go:31] will retry after 3.921003057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.758355  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.758442  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.758740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.258595  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.258667  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.259014  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:17.259070  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:17.398537  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:17.459046  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:17.459087  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.459108  826329 retry.go:31] will retry after 6.352068949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.758636  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.758713  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.759027  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.757758  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.758113  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.258205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.757895  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:19.758338  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:20.183008  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:20.244376  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.244427  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.244447  826329 retry.go:31] will retry after 4.642616038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.258603  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.258946  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.757858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.758256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.757997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.758369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:22.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.757950  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.758369  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.257963  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.258271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.758124  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.758456  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.758513  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
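	[editor's note] The node_ready warnings above come from a ~500ms poll of the node object that keeps hitting "connection refused" while the apiserver is down. A minimal sketch of that readiness poll, using the endpoint and node name from the log (plain net/http stands in for minikube's instrumented round tripper, and certificate verification is skipped only because this is a local illustration):

```go
// Sketch: GET the node object every 500ms and treat connection errors as
// retryable, the way the poll loop in the log does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-525396"
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver answered with status", resp.Status)
		resp.Body.Close()
		return
	}
	fmt.Println("node never became reachable")
}
```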
	I1208 00:32:23.811708  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:23.877239  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:23.877286  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:23.877305  826329 retry.go:31] will retry after 3.991513365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.257726  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.757814  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.757890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.887652  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:24.946807  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:24.946870  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.946894  826329 retry.go:31] will retry after 6.868435312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:25.258372  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.258452  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.258751  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.758655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.759159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.759287  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:26.257937  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.258011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.258320  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.757849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.758164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.258591  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.758609  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.869339  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:27.929619  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:27.929669  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:27.929689  826329 retry.go:31] will retry after 5.640751927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:28.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.258197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:28.258246  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.757900  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.257906  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.758680  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.758746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.759010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:30.759051  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:31.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.258120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.757934  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.815479  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:31.877679  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:31.877725  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:31.877744  826329 retry.go:31] will retry after 9.288265427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:32.258204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.258274  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.258579  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.758594  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.758959  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.257805  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.258256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:33.258316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:33.570705  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:33.628260  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:33.631756  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.631797  826329 retry.go:31] will retry after 7.380803559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.758003  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.758091  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.257826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.257908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.757933  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.757723  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:35.758156  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:36.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.257953  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.258310  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.758204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.758282  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.758636  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:37.758697  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:38.258444  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.258520  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.258964  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.758657  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.758988  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.258591  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.259009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.757689  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.757764  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.758032  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.257724  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.257806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.258168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:40.258225  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:40.757812  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.757892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.013670  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:41.072281  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.076192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.076223  826329 retry.go:31] will retry after 30.64284814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.166454  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:41.227404  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.227446  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.227466  826329 retry.go:31] will retry after 28.006603896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.258583  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.258655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.758793  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.758886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.759193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.257895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.258236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:42.258293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.758154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.758523  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.258386  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.258459  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.258782  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.758542  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.758614  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.758961  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.258683  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.258759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:44.259091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.757800  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.758206  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.258097  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.259164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.757651  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.757746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.758010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.257735  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.257815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.258117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.757885  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.757969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.758288  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.758347  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:47.258326  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.258400  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.258685  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.758684  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.758763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.759114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.257709  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.757752  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.758123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:49.258218  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:49.757765  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.758188  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:51.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.258204  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:51.258253  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:51.757903  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.757978  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.758301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.757965  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.758392  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:53.758279  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:54.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.257882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:54.757818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.757897  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.258277  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.757925  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:55.758403  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:56.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.258035  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.258362  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:56.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.258678  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.258763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.259088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.757900  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.757974  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:58.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.258215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:58.258269  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:58.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.758311  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.257792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.258100  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.757787  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:00.257846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:00.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:00.758031  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.758108  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.757962  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:02.257983  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.258055  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.258387  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:02.258456  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:02.757985  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.758059  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.258055  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.258125  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.258438  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.757882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:04.257989  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:04.258481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:04.758118  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.758201  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.758485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.258270  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.758448  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.758527  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.758934  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.257684  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.257772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.258049  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:06.758206  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:07.258726  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.258824  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.259215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:07.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.758011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.758271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.257849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:08.758228  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:09.234960  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:09.258398  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.258467  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.258726  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:09.299771  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:09.299811  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.299830  826329 retry.go:31] will retry after 22.917133282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
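
	Note: each failed addon apply above is queued for another attempt after a growing delay; the "--validate=false" hint in the stderr is kubectl's generic suggestion, while minikube instead waits and retries with validation left on. Below is a minimal Go sketch of that retry-with-backoff behaviour, assuming kubectl is on PATH; it is an illustration only, not minikube's actual addons.go/retry.go code, and the paths are copied from the log.

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run out,
	    // sleeping a little longer after each failure.
	    func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	        backoff := 5 * time.Second
	        var lastErr error
	        for i := 0; i < attempts; i++ {
	            cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	            cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	            out, err := cmd.CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	            fmt.Printf("apply failed, will retry after %s\n", backoff)
	            time.Sleep(backoff)
	            backoff *= 2 // grow the delay between attempts
	        }
	        return lastErr
	    }

	    func main() {
	        // Paths copied from the log; assumes kubectl is on PATH.
	        err := applyWithRetry("/var/lib/minikube/kubeconfig",
	            "/etc/kubernetes/addons/storage-provisioner.yaml", 5)
	        if err != nil {
	            fmt.Println("giving up:", err)
	        }
	    }
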
	I1208 00:33:09.758561  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.758640  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.758995  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.258770  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.258868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.259197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.757838  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.758190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.257813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:11.258179  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:11.719678  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:11.758124  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.758203  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.758476  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.779600  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:11.783324  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:11.783357  826329 retry.go:31] will retry after 27.574784486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:12.257740  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.258104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:12.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:13.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.258219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:13.258272  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:13.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.757988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.257958  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.258037  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.258315  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:15.258360  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:15.757919  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.757879  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.257963  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.258036  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.258357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:17.258414  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.758272  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.758354  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.758668  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.258406  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.258487  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.258798  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.758471  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.758544  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.758891  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.258691  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.258772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.259134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.259190  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.757739  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.758088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.757870  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.757943  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.758290  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.757993  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.257852  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.258182  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:24.258220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.757878  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.758349  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.258345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.257811  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.258284  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.758040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.258252  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.258330  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.258588  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.758645  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.758735  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.759079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.758067  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.758108  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.757789  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.257875  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.257941  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.258210  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.757889  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.758308  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.257774  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.757714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.757784  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.758087  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.217681  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:32.258110  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.258497  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.272413  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:32.276021  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.276065  826329 retry.go:31] will retry after 31.830018043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.258151  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.258517  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.758451  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.258598  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.259035  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.758635  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.758714  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.759056  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.257714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.258111  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.758267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.257939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.757891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.258214  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.258289  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.258578  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:37.258623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.758354  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.758421  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.758674  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.258403  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.258497  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.258867  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.758486  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.758558  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.758906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.258694  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.258758  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.259030  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.259072  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.358376  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:39.412374  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416050  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416143  826329 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
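
	Note: the default-storageclass callback gives up here because every apply attempt hit "connection refused"; the applies can only succeed once the apiserver behind 192.168.49.2:8441 is serving again. A hedged Go sketch of probing the apiserver's /readyz endpoint before attempting an apply is shown below; it assumes /readyz is reachable without authentication (as it is under the default RBAC public-info rules), takes the host and port from the log, and skips TLS verification only because this probe does not load the cluster CA.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForAPIServer polls /readyz until it answers 200 OK or the deadline passes.
	    func waitForAPIServer(baseURL string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // Skip cert verification for this probe only; a real client would use
	            // the cluster CA from the kubeconfig instead.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(baseURL + "/readyz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(time.Second)
	        }
	        return fmt.Errorf("apiserver at %s not ready after %s", baseURL, timeout)
	    }

	    func main() {
	        if err := waitForAPIServer("https://192.168.49.2:8441", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
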
	I1208 00:33:39.758638  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.758720  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.759108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.757846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.757931  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.257809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.757977  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.758050  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.758393  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:42.258098  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.258182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.258488  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.758485  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.758557  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.758915  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.258576  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.258649  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.258992  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.757700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.757773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.758038  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.257757  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.258132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:44.258184  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.757809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.757999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.758336  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.258084  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.258468  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:46.258519  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.758126  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.758195  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.758462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.258480  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.258906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.758307  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.257842  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.758219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.758291  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:49.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.258184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.757922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.757790  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.257971  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.258282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:51.258346  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.757834  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.757908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.758182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.758452  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.258459  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.258900  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:53.258955  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.758700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.758780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.759083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.258123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.758170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:55.758182  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:56.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.757939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.758018  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.758340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.258337  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.258409  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.258677  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.758592  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:57.759063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:58.257674  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.257773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.757693  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.757771  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.758081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.258187  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.758199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.265698  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.265780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.266096  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:00.266143  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:00.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.757872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.258053  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.257892  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.258340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.758185  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.758273  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.758590  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:02.758643  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:03.258621  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.258702  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.757895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.758191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.106865  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:34:04.166273  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166323  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166403  826329 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:34:04.169502  826329 out.go:179] * Enabled addons: 
	I1208 00:34:04.171536  826329 addons.go:530] duration metric: took 1m56.935875389s for enable addons: enabled=[]
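(Aside on the addon failure just logged: the kubectl apply of storage-provisioner.yaml fails only because validation cannot download the OpenAPI schema while localhost:8441 refuses connections, and minikube records "apply failed, will retry". Below is a minimal, hypothetical Go sketch of that retry behaviour: probe the apiserver socket first, then re-run the same apply command. The helper names are illustrative, not minikube's real addons code; the port, kubectl path, and manifest path are copied from the log.)

	// Hypothetical sketch of "apply failed, will retry" for an addon manifest.
	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)

	func applyAddonWithRetry(manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			// kubectl validation needs https://localhost:8441 reachable, so probe
			// the socket before spending an attempt on the apply itself.
			if conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second); err == nil {
				conn.Close()
				cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
					"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
					"apply", "--force", "-f", manifest)
				out, runErr := cmd.CombinedOutput()
				if runErr == nil {
					return nil
				}
				lastErr = fmt.Errorf("apply failed: %v\n%s", runErr, out)
			} else {
				lastErr = fmt.Errorf("apiserver not reachable: %v", err)
			}
			time.Sleep(5 * time.Second)
		}
		return lastErr
	}

	func main() {
		if err := applyAddonWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 12); err != nil {
			fmt.Println("! Enabling 'storage-provisioner' returned an error:", err)
		}
	}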
	I1208 00:34:04.258604  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.258682  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.259013  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.758662  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.758731  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.759011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:04.759062  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:05.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.758048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.758370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.257730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.258101  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.758131  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.758204  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.758570  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.258500  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.258586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.258950  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:07.259055  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:07.757997  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.758357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.257713  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.257788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.258063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:09.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:10.257921  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.258346  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.757735  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.757804  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.758062  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.757910  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:11.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:12.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.258391  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.757907  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.757979  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.258000  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.258079  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.757976  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.758046  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.758318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:14.258216  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:14.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.758229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.257940  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.258013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.258338  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:16.258395  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:16.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.758127  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.258701  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.258775  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.757896  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.758282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.257973  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.258048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.757762  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:18.758243  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.258352  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.758033  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.758409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.757890  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.757981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.758323  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:20.758384  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:21.257944  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.258010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.758322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.257850  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.257925  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.258270  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.758019  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.758365  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:22.758408  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:23.258071  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.258151  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.258491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.758281  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.758363  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.758707  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.258477  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.258561  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.759183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.759247  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:25.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.258000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.757806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.758120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.258248  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.757971  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.758380  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.258327  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.258401  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.258666  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:27.258716  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.758723  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.758798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.759103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.258027  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.258370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.758085  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.758508  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:29.758566  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:30.258264  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.258340  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.258608  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.758360  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.758437  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.758793  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.258627  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.258701  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.259047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.757815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.758076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.257780  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:32.258235  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:32.758097  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.758176  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.258283  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.258362  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.258621  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.758421  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.758509  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.758874  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.258773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.259148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:34.259210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.757843  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.757921  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.757995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.758360  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.257977  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.258049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.757866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.758233  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:37.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.257964  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.258296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.758129  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.758200  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.758490  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.258191  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.258269  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.758454  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.758534  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.758898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.758959  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:39.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.258627  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.258916  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.758708  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.759139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.257796  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.757783  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.758212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:41.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:41.258249  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:41.757913  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.758308  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.758011  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.758449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:43.258150  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.258227  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.258566  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:43.258632  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:43.758358  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.758430  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.758722  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.258546  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.259073  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.757871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.257935  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.258485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.758673  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.758756  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:45.759202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:46.257864  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.257946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.258291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:46.758013  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.258513  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.258598  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.259004  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.757974  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.758047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:48.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.257839  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.258125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:48.258175  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:48.757743  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.757816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.758138  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.257906  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.758137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:50.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.257875  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:50.258267  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:50.757934  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.758014  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.758361  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.258044  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.258119  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.258431  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.758821  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.758917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.759213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.757986  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.758060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.758375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:52.758428  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:53.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:53.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.758227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.757810  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.757886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:55.257839  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.257917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:55.258313  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:55.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.757796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.757854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.758141  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:57.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:57.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:57.758246  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.758647  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.258478  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.258560  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.258910  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.257905  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.258259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.758063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.758436  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:59.758494  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:00.270583  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.271106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.271544  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:00.758373  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.758448  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.758792  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.258597  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.259052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:02.257942  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.258019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.258319  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:02.258369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:02.758254  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.758335  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.758657  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.258485  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.258576  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.258926  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.757769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.258084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:04.758220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:05.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.257988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.258274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.257890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.258218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.758268  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:07.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.258264  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.258524  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.758503  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.758579  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.758911  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.258711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.258788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.259165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.758114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:09.258314  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:09.757867  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.257728  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.758154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.257828  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.257901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:11.758292  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.758010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.758331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.257734  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.258128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.757740  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.758156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.257879  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.257958  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:14.258372  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:14.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.258226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.757850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:16.758262  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.758126  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.258225  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.757982  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.758084  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:18.758496  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:19.258078  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.258148  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.258462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.758152  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.257773  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.257847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.258174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.757731  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.758079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:21.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:21.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.758255  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.258007  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.258298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.757958  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.758029  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.257782  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.757721  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.757792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:23.758157  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.257916  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.757747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.757838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.257741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.258153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:25.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.257792  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.257867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.258190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.757716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.757791  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.758047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.257747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.257826  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.258159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.757938  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.758339  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:27.758399  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:28.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.257817  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.258135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.758185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.257754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.757884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.758247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.258359  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:30.258416  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:30.758069  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.758447  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.257716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.757859  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.258342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.758262  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.758582  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:32.758623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.258445  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.258519  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.258864  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.758759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.759120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.757780  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.257854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.258302  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:35.757946  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.758342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.258034  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.258106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.758092  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.758170  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.758498  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.258371  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.258441  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.258740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.258804  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:37.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.758737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.759093  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.758009  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.758085  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.758354  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.258253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.758008  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.758083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.758427  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:39.758481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.258151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.757846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.758147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.258244  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.757920  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.757992  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.758263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.257833  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.258385  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.258459  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.758115  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.758189  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.758495  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.258231  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.258593  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.758356  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.758433  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.758767  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.258451  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.258526  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.258817  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.258887  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:44.758589  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.758661  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.758935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.257830  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.757933  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.257995  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.258070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.258330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.757844  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.758227  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.757930  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.758268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.757753  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.258251  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.758020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.758330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.258077  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.258159  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.258484  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.757837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.757936  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.758281  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.758053  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.758433  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.258161  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.258558  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.758318  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.758393  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.758646  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.758686  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.258483  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.258562  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.258917  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.758694  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.758792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.759186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.257832  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.258147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.757780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.758109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.257711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.258202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.257884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.257966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.758093  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.258229  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.258576  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.258619  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:58.758339  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.758413  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.758719  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.258566  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.258656  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.259028  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.757811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.758074  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.258301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.757822  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.757896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:00.758231  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.257745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.258119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.757848  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.758161  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.257756  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.758045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:02.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:03.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.757799  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.758980  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:36:04.257702  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.258057  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.758149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.257856  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:05.757874  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.757952  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.758274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.257951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.258331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.758228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.258156  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.258257  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.258603  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:07.258657  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:07.758639  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.758722  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.759070  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.257829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.757812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.257802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.257878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.758023  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:09.758454  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:10.258096  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.258168  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.757867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.257926  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.258015  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.758043  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.758118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:12.258271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:12.758147  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.758239  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.758564  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.258372  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.258650  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.758403  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.758476  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.758795  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.258438  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.258516  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.258865  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:14.258923  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:14.758558  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.758632  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.257698  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.257781  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.258012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.258318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.757852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.758196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:16.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:17.257965  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.258040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.757949  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.257775  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.257850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.757883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.258195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:19.757899  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.257800  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.258270  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:21.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.258048  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.258121  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.757988  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.758096  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.758420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:23.258320  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:23.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.758051  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.758371  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.258081  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.258509  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.758321  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.758398  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.758744  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.258469  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.258537  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.258876  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:25.258924  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:25.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.758727  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.759090  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.757942  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.758194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.257841  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.257927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.758332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:27.758386  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:28.257969  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.258045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.758107  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.757822  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.758078  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.257824  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.257913  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.258331  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:30.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.757915  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.257869  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.257937  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.257781  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.757940  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:32.758305  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:33.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.258196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:33.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.758193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.257750  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.757815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.757887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.257918  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.257997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.258317  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:35.258379  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:35.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.757819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.758135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.257783  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.258193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.758166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.258659  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.258733  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:37.259083  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:37.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.758024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.758345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.757932  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.758013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.758289  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.757952  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:39.758433  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:40.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.257793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.258042  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:40.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.257744  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:42.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:42.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.758448  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.757926  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.258047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:44.258465  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:44.757755  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.757827  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.257829  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.257930  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.758253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.757828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:46.758229  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:47.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.257985  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.258332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:47.757967  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.758296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.257872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.757878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:48.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:49.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.757898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.758139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:51.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.257880  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.258144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:51.258193  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:51.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.758200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.257870  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.258287  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.758014  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.758414  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:53.258138  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.258234  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.258594  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:53.258654  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:53.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.257895  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.257969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.258267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.758150  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:55.758195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:56.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.258194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:56.757733  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.758064  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.258687  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.258769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.259122  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.757909  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.757984  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:57.758349  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:58.257827  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.257904  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:58.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.758197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.257858  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.257940  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.758280  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:00.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.258083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.258409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:00.258457  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:00.758379  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.758466  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.758803  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.258644  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.258737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.259037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:02.758316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:03.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.258232  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:03.757961  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.758042  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.758415  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.258085  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.258154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.258494  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.758211  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.758302  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.758664  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:04.758720  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:05.258496  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.258572  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.258935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:05.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.757745  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.758009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.258149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.758260  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:07.258197  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.258266  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.258533  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:07.258574  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:07.758487  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.758564  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.758919  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.258731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.258806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.259157  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.757712  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.757783  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.758052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.758285  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:09.758354  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:10.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.257812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.258068  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:10.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.758172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.758165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:12.257867  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:12.258328  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:12.758227  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.758306  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.758623  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.258376  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.258454  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.258723  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.758551  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.758624  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.758979  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.757823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:14.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:15.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:15.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.758236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.257917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.258276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:16.758276  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:17.257980  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.258060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:17.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.758343  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.258231  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.757795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.757884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.758230  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:19.257736  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:19.258185  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:19.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.257828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.757722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.757789  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.758063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:21.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:21.258238  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:21.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.758000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.257738  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.257820  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.758012  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.758097  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.758430  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.257876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.258177  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.757901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:23.758293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:24.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:24.757779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.758189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.258103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:26.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.258263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:26.258318  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:26.757964  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.758030  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.758273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.258297  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.258369  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.258691  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.758719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.758793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.759134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.257821  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:28.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:29.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:29.757719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.757786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.758037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.258173  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.757761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.758153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:31.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.257787  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.258040  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:31.258078  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:31.757746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.757831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.257904  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.758153  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.758406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:33.257779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.258158  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:33.258205  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:33.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.757959  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.257990  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.258252  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.758130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.257853  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.258198  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:35.258259  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:35.757729  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.757808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.758125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.257840  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:37.258028  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.258098  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.258344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:37.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:37.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.758350  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.757892  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:39.758261  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:40.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.257976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.258247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:40.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.758250  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.757732  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.758046  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:42.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.258257  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:42.258317  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.758145  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.758527  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.258368  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.258629  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.758381  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.758456  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:44.258642  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.258728  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.259104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:44.259162  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:44.757666  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.757747  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.758033  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.258118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.258898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.758751  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.759069  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:46.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.258765  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.259139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:46.259195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:46.757764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.758163  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.258575  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.757955  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.758294  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.757898  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:48.758358  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:49.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.258126  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:49.757824  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.757899  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.258201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.757976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:51.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.257834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:51.258245  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:51.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.758176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.257907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.757998  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.758067  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.758400  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.257761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.257831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.258156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.758051  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:53.758091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:54.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:54.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.258107  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.757840  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.758276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:55.758329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:56.257991  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.258063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:56.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.758080  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.257909  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.258228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.757928  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.758314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:57.758371  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:58.257725  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:58.757817  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.758235  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.257927  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.258328  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.757914  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:00.257912  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.258367  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:00.258421  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:00.758080  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.758156  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.758491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.258328  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.258416  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.258737  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.758951  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.257691  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.257768  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.258118  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:02.758341  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:03.258024  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.258103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.258449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:03.758162  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.758236  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.758778  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.258999  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.757698  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:05.257820  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:05.258295  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:05.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.257819  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.757775  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:07.262532  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.262623  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.263011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:07.263063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:07.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:08.257967  826329 node_ready.go:38] duration metric: took 6m0.00040399s for node "functional-525396" to be "Ready" ...
	I1208 00:38:08.261085  826329 out.go:203] 
	W1208 00:38:08.263874  826329 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:38:08.263896  826329 out.go:285] * 
	W1208 00:38:08.266040  826329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:38:08.269117  826329 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714298717Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714306414Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714311813Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714317286Z" level=info msg="RDT not available in the host system"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714334664Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715207312Z" level=info msg="Conmon does support the --sync option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715239272Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715254541Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715984501Z" level=info msg="Conmon does support the --sync option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716004177Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716135961Z" level=info msg="Updated default CNI network name to "
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716848903Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.717256324Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.717330195Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759894658Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759928759Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759969326Z" level=info msg="Create NRI interface"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760371471Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760401198Z" level=info msg="runtime interface created"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760416583Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760453055Z" level=info msg="runtime interface starting up..."
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.7604662Z" level=info msg="starting plugins..."
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760483997Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760558443Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:32:05 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:10.229585    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:10.230029    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:10.231736    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:10.232450    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:10.234031    8615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 23:24] overlayfs: idmapped layers are currently not supported
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:38:10 up  5:20,  0 user,  load average: 0.19, 0.24, 0.67
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:38:08 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:08 functional-525396 kubelet[8506]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:08 functional-525396 kubelet[8506]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:08 functional-525396 kubelet[8506]: E1208 00:38:08.068006    8506 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:08 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:08 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:08 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 08 00:38:08 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:08 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:08 functional-525396 kubelet[8512]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:08 functional-525396 kubelet[8512]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:08 functional-525396 kubelet[8512]: E1208 00:38:08.829240    8512 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:08 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:08 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:09 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 08 00:38:09 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:09 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:09 functional-525396 kubelet[8532]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:09 functional-525396 kubelet[8532]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:09 functional-525396 kubelet[8532]: E1208 00:38:09.565498    8532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:09 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:09 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:10 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 08 00:38:10 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:10 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (369.474461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.61s)
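Editor's note on the failure chain visible in the SoftStart log above: the kubelet never passes configuration validation because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 1137-1139), the apiserver on 192.168.49.2:8441 never answers, and minikube's node-Ready wait gives up after 6m0s with GUEST_START. The shell sketch below is a diagnostic aid, not part of the test suite or of minikube; it assumes shell access to the kic container named functional-525396 (name taken from the log) and uses only standard commands.

	# Hypothetical triage for the SoftStart failure above (not part of the test suite).
	# Assumes the kic container is named functional-525396, as in the log.
	# 1. Which cgroup hierarchy is the node using? cgroup2fs => v2, tmpfs => v1
	#    (the v1 case is exactly what this kubelet build rejects).
	docker exec functional-525396 stat -fc %T /sys/fs/cgroup/
	# 2. Confirm the kubelet crash loop and read its last error lines.
	docker exec functional-525396 systemctl status kubelet --no-pager
	docker exec functional-525396 journalctl -u kubelet -n 20 --no-pager
	# 3. Probe the apiserver; a refused connection here matches the
	#    "dial tcp 192.168.49.2:8441: connect: connection refused" retries above.
	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver unreachable"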

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-525396 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-525396 get po -A: exit status 1 (64.892849ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-525396 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-525396 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-525396 get po -A"
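The KubectlGetPods failure above is the same symptom seen from the client side: the functional-525396 context points at 192.168.49.2:8441, which refuses connections while the kubelet crash-loops. A minimal sketch for confirming what the context actually targets, assuming only the context name taken from the log:

	# Hypothetical check, assuming the context name from the log (functional-525396).
	# Print the server URL the context targets, then probe it with a short timeout.
	kubectl config view --minify --context functional-525396 \
	  -o jsonpath='{.clusters[0].cluster.server}{"\n"}'
	kubectl --context functional-525396 get nodes --request-timeout=5s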
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
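Note: the docker inspect output above shows the kic container itself is healthy ("Status": "running", "RestartCount": 0) and that the guest ports 22, 2376, 5000, 8441 and 32443 are published to loopback ports 33508-33512, so the failure is inside the guest rather than in Docker networking. A small example of pulling a single mapping with a Go template, in the same style of format string minikube itself uses for 22/tcp later in this log:

	# Host port published for the apiserver port 8441/tcp; prints 33511 for this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-525396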
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (351.341627ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
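Note: taken together, the two status probes separate the machine from the control plane: the Host field reports Running (the container is up) while the APIServer field reports Stopped, which matches the kubelet/cgroup failure described earlier. A sketch that reads both fields in one call, using only fields the harness already queries; the combined template is an illustration, not part of the test:

	# Prints "Running:Stopped" for this run; the non-zero exit (status 2 here) flags the degraded component
	out/minikube-linux-arm64 status --format='{{.Host}}:{{.APIServer}}' -p functional-525396 -n functional-525396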
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 logs -n 25: (1.034238868s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /usr/share/ca-certificates/7918072.pem                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save kicbase/echo-server:functional-714395 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ update-context │ functional-714395 update-context --alsologtostderr -v=2                                                                                                   │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image rm kicbase/echo-server:functional-714395 --alsologtostderr                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image save --daemon kicbase/echo-server:functional-714395 --alsologtostderr                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format short --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format yaml --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format json --alsologtostderr                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls --format table --alsologtostderr                                                                                               │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh            │ functional-714395 ssh pgrep buildkitd                                                                                                                     │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image          │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                                    │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image          │ functional-714395 image ls                                                                                                                                │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete         │ -p functional-714395                                                                                                                                      │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start          │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-525396 --alsologtostderr -v=8                                                                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:32:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:32:02.748489  826329 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:32:02.748673  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748687  826329 out.go:374] Setting ErrFile to fd 2...
	I1208 00:32:02.748692  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748975  826329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:32:02.749379  826329 out.go:368] Setting JSON to false
	I1208 00:32:02.750240  826329 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18855,"bootTime":1765135068,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:32:02.750321  826329 start.go:143] virtualization:  
	I1208 00:32:02.755521  826329 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:32:02.759227  826329 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:32:02.759498  826329 notify.go:221] Checking for updates...
	I1208 00:32:02.765171  826329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:32:02.768668  826329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:02.771686  826329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:32:02.774728  826329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:32:02.777727  826329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:32:02.781794  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:02.781971  826329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:32:02.823053  826329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:32:02.823186  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.879429  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.869702269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.879546  826329 docker.go:319] overlay module found
	I1208 00:32:02.884410  826329 out.go:179] * Using the docker driver based on existing profile
	I1208 00:32:02.887311  826329 start.go:309] selected driver: docker
	I1208 00:32:02.887330  826329 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.887447  826329 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:32:02.887565  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.942385  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.932846048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.942810  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:02.942902  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:02.942960  826329 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.948301  826329 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:32:02.951106  826329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:32:02.954049  826329 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:32:02.956917  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:02.956968  826329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:32:02.956999  826329 cache.go:65] Caching tarball of preloaded images
	I1208 00:32:02.957004  826329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:32:02.957092  826329 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:32:02.957103  826329 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:32:02.957210  826329 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:32:02.976499  826329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:32:02.976524  826329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:32:02.976543  826329 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:32:02.976579  826329 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:32:02.976652  826329 start.go:364] duration metric: took 48.116µs to acquireMachinesLock for "functional-525396"
	I1208 00:32:02.976674  826329 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:32:02.976683  826329 fix.go:54] fixHost starting: 
	I1208 00:32:02.976940  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:02.996203  826329 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:32:02.996234  826329 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:32:02.999434  826329 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:32:02.999477  826329 machine.go:94] provisionDockerMachine start ...
	I1208 00:32:02.999559  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.021375  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.021746  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.021762  826329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:32:03.174523  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.174550  826329 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:32:03.174616  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.192743  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.193067  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.193084  826329 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:32:03.356577  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.356704  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.375055  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.375394  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.375419  826329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:32:03.529767  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:32:03.529793  826329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:32:03.529822  826329 ubuntu.go:190] setting up certificates
	I1208 00:32:03.529839  826329 provision.go:84] configureAuth start
	I1208 00:32:03.529901  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:03.552219  826329 provision.go:143] copyHostCerts
	I1208 00:32:03.552258  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552298  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:32:03.552310  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552383  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:32:03.552464  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552480  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:32:03.552484  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552511  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:32:03.552550  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552566  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:32:03.552570  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552592  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:32:03.552642  826329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:32:03.707027  826329 provision.go:177] copyRemoteCerts
	I1208 00:32:03.707105  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:32:03.707150  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.724035  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:03.830514  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:32:03.830586  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:32:03.848126  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:32:03.848238  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:32:03.865293  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:32:03.865368  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:32:03.882781  826329 provision.go:87] duration metric: took 352.917637ms to configureAuth
	I1208 00:32:03.882808  826329 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:32:03.883086  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:03.883204  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.900405  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.900722  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.900745  826329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:32:04.247102  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:32:04.247132  826329 machine.go:97] duration metric: took 1.247646186s to provisionDockerMachine
	I1208 00:32:04.247143  826329 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:32:04.247156  826329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:32:04.247233  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:32:04.247291  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.269420  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.374672  826329 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:32:04.377926  826329 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:32:04.377948  826329 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:32:04.377953  826329 command_runner.go:130] > VERSION_ID="12"
	I1208 00:32:04.377958  826329 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:32:04.377964  826329 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:32:04.377968  826329 command_runner.go:130] > ID=debian
	I1208 00:32:04.377973  826329 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:32:04.377998  826329 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:32:04.378009  826329 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:32:04.378363  826329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:32:04.378386  826329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:32:04.378397  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:32:04.378453  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:32:04.378535  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:32:04.378546  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 00:32:04.378621  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:32:04.378628  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> /etc/test/nested/copy/791807/hosts
	I1208 00:32:04.378672  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:32:04.386632  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:04.404202  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:32:04.421545  826329 start.go:296] duration metric: took 174.385446ms for postStartSetup
	I1208 00:32:04.421649  826329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:32:04.421695  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.439941  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.543929  826329 command_runner.go:130] > 13%
	I1208 00:32:04.544005  826329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:32:04.548692  826329 command_runner.go:130] > 169G
	I1208 00:32:04.548719  826329 fix.go:56] duration metric: took 1.572034198s for fixHost
	I1208 00:32:04.548730  826329 start.go:83] releasing machines lock for "functional-525396", held for 1.572067364s
	I1208 00:32:04.548856  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:04.565574  826329 ssh_runner.go:195] Run: cat /version.json
	I1208 00:32:04.565638  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.565923  826329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:32:04.565984  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.584847  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.600519  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.771794  826329 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:32:04.774495  826329 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:32:04.774657  826329 ssh_runner.go:195] Run: systemctl --version
	I1208 00:32:04.780874  826329 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:32:04.780917  826329 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:32:04.781367  826329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:32:04.818112  826329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:32:04.822491  826329 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:32:04.822532  826329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:32:04.822595  826329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:32:04.830492  826329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:32:04.830518  826329 start.go:496] detecting cgroup driver to use...
	I1208 00:32:04.830579  826329 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:32:04.830661  826329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:32:04.846467  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:32:04.859999  826329 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:32:04.860093  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:32:04.876040  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:32:04.889316  826329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:32:04.999380  826329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:32:05.135529  826329 docker.go:234] disabling docker service ...
	I1208 00:32:05.135652  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:32:05.150887  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:32:05.164082  826329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:32:05.274195  826329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:32:05.386139  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:32:05.399321  826329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:32:05.411741  826329 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 00:32:05.412925  826329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:32:05.413007  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.421375  826329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:32:05.421462  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.430145  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.438751  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.447666  826329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:32:05.455572  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.464290  826329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.472537  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.481189  826329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:32:05.487727  826329 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:32:05.488614  826329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:32:05.496261  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:05.603146  826329 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:32:05.769023  826329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:32:05.769169  826329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:32:05.773391  826329 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 00:32:05.773452  826329 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:32:05.773473  826329 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1208 00:32:05.773494  826329 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:05.773524  826329 command_runner.go:130] > Access: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773553  826329 command_runner.go:130] > Modify: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773581  826329 command_runner.go:130] > Change: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773598  826329 command_runner.go:130] >  Birth: -
	I1208 00:32:05.774292  826329 start.go:564] Will wait 60s for crictl version
	I1208 00:32:05.774387  826329 ssh_runner.go:195] Run: which crictl
	I1208 00:32:05.778688  826329 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:32:05.779547  826329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:32:05.803509  826329 command_runner.go:130] > Version:  0.1.0
	I1208 00:32:05.803790  826329 command_runner.go:130] > RuntimeName:  cri-o
	I1208 00:32:05.804036  826329 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1208 00:32:05.804294  826329 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:32:05.806608  826329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:32:05.806739  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.840244  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.840321  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.840340  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.840361  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.840391  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.840415  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.840434  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.840452  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.840471  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.840498  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.840519  826329 command_runner.go:130] >      static
	I1208 00:32:05.840536  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.840553  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.840567  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.840593  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.840612  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.840629  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.840647  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.840664  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.840690  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.841800  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.872333  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.872357  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.872369  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.872376  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.872381  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.872385  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.872389  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.872395  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.872399  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.872408  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.872412  826329 command_runner.go:130] >      static
	I1208 00:32:05.872422  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.872437  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.872444  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.872448  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.872451  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.872457  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.872463  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.872467  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.872480  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.877414  826329 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:32:05.880269  826329 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:32:05.896780  826329 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:32:05.900764  826329 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:32:05.900873  826329 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:32:05.900985  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:05.901051  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.935654  826329 command_runner.go:130] > {
	I1208 00:32:05.935679  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.935684  826329 command_runner.go:130] >     {
	I1208 00:32:05.935694  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.935699  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935705  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.935708  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935713  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935724  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.935736  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.935743  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935756  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.935763  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935768  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935772  826329 command_runner.go:130] >     },
	I1208 00:32:05.935775  826329 command_runner.go:130] >     {
	I1208 00:32:05.935781  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.935787  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935793  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.935796  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935800  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935810  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.935821  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.935825  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935829  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.935836  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935845  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935853  826329 command_runner.go:130] >     },
	I1208 00:32:05.935857  826329 command_runner.go:130] >     {
	I1208 00:32:05.935864  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.935870  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935876  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.935879  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935885  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935894  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.935905  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.935908  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935912  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.935917  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.935923  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935927  826329 command_runner.go:130] >     },
	I1208 00:32:05.935932  826329 command_runner.go:130] >     {
	I1208 00:32:05.935938  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.935946  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935956  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.935962  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935967  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935975  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.935986  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.935990  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935994  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.936001  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936006  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936011  826329 command_runner.go:130] >       },
	I1208 00:32:05.936021  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936028  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936031  826329 command_runner.go:130] >     },
	I1208 00:32:05.936034  826329 command_runner.go:130] >     {
	I1208 00:32:05.936041  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.936048  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936053  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.936057  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936063  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936072  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.936083  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.936087  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936091  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.936095  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936101  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936105  826329 command_runner.go:130] >       },
	I1208 00:32:05.936110  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936116  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936119  826329 command_runner.go:130] >     },
	I1208 00:32:05.936122  826329 command_runner.go:130] >     {
	I1208 00:32:05.936129  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.936136  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936143  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.936152  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936160  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936169  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.936179  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.936184  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936189  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.936195  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936199  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936203  826329 command_runner.go:130] >       },
	I1208 00:32:05.936207  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936215  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936219  826329 command_runner.go:130] >     },
	I1208 00:32:05.936222  826329 command_runner.go:130] >     {
	I1208 00:32:05.936228  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.936235  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936240  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.936244  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936255  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936263  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.936271  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.936277  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936282  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.936288  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936292  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936295  826329 command_runner.go:130] >     },
	I1208 00:32:05.936298  826329 command_runner.go:130] >     {
	I1208 00:32:05.936306  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.936313  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936318  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.936322  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936326  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936336  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.936362  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.936372  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936377  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.936387  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936391  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936395  826329 command_runner.go:130] >       },
	I1208 00:32:05.936406  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936410  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936414  826329 command_runner.go:130] >     },
	I1208 00:32:05.936417  826329 command_runner.go:130] >     {
	I1208 00:32:05.936424  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.936432  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936437  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.936441  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936445  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936455  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.936465  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.936469  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936473  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.936483  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936487  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.936490  826329 command_runner.go:130] >       },
	I1208 00:32:05.936500  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936504  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.936507  826329 command_runner.go:130] >     }
	I1208 00:32:05.936510  826329 command_runner.go:130] >   ]
	I1208 00:32:05.936513  826329 command_runner.go:130] > }
	I1208 00:32:05.936690  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.936705  826329 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:32:05.936757  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.965491  826329 command_runner.go:130] > {
	I1208 00:32:05.965510  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.965515  826329 command_runner.go:130] >     {
	I1208 00:32:05.965525  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.965542  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965549  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.965553  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965557  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965584  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.965593  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.965596  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965600  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.965604  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965614  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965618  826329 command_runner.go:130] >     },
	I1208 00:32:05.965620  826329 command_runner.go:130] >     {
	I1208 00:32:05.965627  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.965630  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965635  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.965639  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965642  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965650  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.965659  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.965662  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965666  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.965669  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965675  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965679  826329 command_runner.go:130] >     },
	I1208 00:32:05.965682  826329 command_runner.go:130] >     {
	I1208 00:32:05.965689  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.965692  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965700  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.965704  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965708  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965715  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.965723  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.965726  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965733  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.965738  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.965741  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965744  826329 command_runner.go:130] >     },
	I1208 00:32:05.965747  826329 command_runner.go:130] >     {
	I1208 00:32:05.965754  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.965758  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965763  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.965768  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965772  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965779  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.965786  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.965789  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965793  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.965796  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965800  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965803  826329 command_runner.go:130] >       },
	I1208 00:32:05.965811  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965815  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965818  826329 command_runner.go:130] >     },
	I1208 00:32:05.965821  826329 command_runner.go:130] >     {
	I1208 00:32:05.965827  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.965831  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965841  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.965844  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965848  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965859  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.965867  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.965870  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965874  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.965877  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965881  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965884  826329 command_runner.go:130] >       },
	I1208 00:32:05.965891  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965895  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965898  826329 command_runner.go:130] >     },
	I1208 00:32:05.965901  826329 command_runner.go:130] >     {
	I1208 00:32:05.965907  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.965911  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965917  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.965920  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965924  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965932  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.965944  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.965947  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965951  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.965954  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965958  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965961  826329 command_runner.go:130] >       },
	I1208 00:32:05.965964  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965968  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965971  826329 command_runner.go:130] >     },
	I1208 00:32:05.965974  826329 command_runner.go:130] >     {
	I1208 00:32:05.965980  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.965984  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965989  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.965992  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965995  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966003  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.966013  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.966016  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966020  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.966023  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966027  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966030  826329 command_runner.go:130] >     },
	I1208 00:32:05.966033  826329 command_runner.go:130] >     {
	I1208 00:32:05.966042  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.966046  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966051  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.966054  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966058  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966066  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.966082  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.966086  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966090  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.966094  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966097  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.966100  826329 command_runner.go:130] >       },
	I1208 00:32:05.966104  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966109  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966112  826329 command_runner.go:130] >     },
	I1208 00:32:05.966117  826329 command_runner.go:130] >     {
	I1208 00:32:05.966124  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.966127  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966131  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.966136  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966140  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966149  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.966156  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.966160  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966163  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.966167  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966171  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.966173  826329 command_runner.go:130] >       },
	I1208 00:32:05.966177  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966180  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.966183  826329 command_runner.go:130] >     }
	I1208 00:32:05.966186  826329 command_runner.go:130] >   ]
	I1208 00:32:05.966189  826329 command_runner.go:130] > }
	I1208 00:32:05.968541  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.968564  826329 cache_images.go:86] Images are preloaded, skipping loading
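	Both crictl listings above return the same nine images, which is why the loader reports them as preloaded and skips extraction. A quick way to reproduce the comparison by hand, assuming jq is available on the node (the test itself parses the JSON in Go):

	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort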
	I1208 00:32:05.968572  826329 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:32:05.968676  826329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
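	The kubelet drop-in generated above pins the node name and IP, points the service at the minikube-provided v1.35.0-beta.0 kubelet binary, and disables per-QoS cgroups. A hedged way to confirm what actually landed on the node (the drop-in path itself is not part of this excerpt):

	  sudo systemctl cat kubelet | grep -E 'hostname-override|node-ip|cgroups-per-qos'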
	I1208 00:32:05.968759  826329 ssh_runner.go:195] Run: crio config
	I1208 00:32:06.017314  826329 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 00:32:06.017338  826329 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 00:32:06.017347  826329 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 00:32:06.017350  826329 command_runner.go:130] > #
	I1208 00:32:06.017357  826329 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 00:32:06.017363  826329 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 00:32:06.017370  826329 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 00:32:06.017378  826329 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 00:32:06.017384  826329 command_runner.go:130] > # reload'.
	I1208 00:32:06.017391  826329 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 00:32:06.017404  826329 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 00:32:06.017411  826329 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 00:32:06.017417  826329 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 00:32:06.017423  826329 command_runner.go:130] > [crio]
	I1208 00:32:06.017429  826329 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 00:32:06.017434  826329 command_runner.go:130] > # containers images, in this directory.
	I1208 00:32:06.017704  826329 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 00:32:06.017722  826329 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 00:32:06.017729  826329 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1208 00:32:06.017738  826329 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1208 00:32:06.017898  826329 command_runner.go:130] > # imagestore = ""
	I1208 00:32:06.017914  826329 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 00:32:06.017922  826329 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 00:32:06.018164  826329 command_runner.go:130] > # storage_driver = "overlay"
	I1208 00:32:06.018180  826329 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 00:32:06.018187  826329 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 00:32:06.018278  826329 command_runner.go:130] > # storage_option = [
	I1208 00:32:06.018455  826329 command_runner.go:130] > # ]
	I1208 00:32:06.018487  826329 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 00:32:06.018500  826329 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 00:32:06.018675  826329 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 00:32:06.018694  826329 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 00:32:06.018706  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 00:32:06.018719  826329 command_runner.go:130] > # always happen on a node reboot
	I1208 00:32:06.018990  826329 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 00:32:06.019024  826329 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 00:32:06.019035  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 00:32:06.019041  826329 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 00:32:06.019224  826329 command_runner.go:130] > # version_file_persist = ""
	I1208 00:32:06.019243  826329 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 00:32:06.019258  826329 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 00:32:06.019484  826329 command_runner.go:130] > # internal_wipe = true
	I1208 00:32:06.019500  826329 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1208 00:32:06.019507  826329 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1208 00:32:06.019754  826329 command_runner.go:130] > # internal_repair = true
	I1208 00:32:06.019769  826329 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 00:32:06.019785  826329 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 00:32:06.019793  826329 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 00:32:06.020120  826329 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 00:32:06.020138  826329 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 00:32:06.020143  826329 command_runner.go:130] > [crio.api]
	I1208 00:32:06.020148  826329 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 00:32:06.020346  826329 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 00:32:06.020366  826329 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 00:32:06.020581  826329 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 00:32:06.020605  826329 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 00:32:06.020611  826329 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 00:32:06.020863  826329 command_runner.go:130] > # stream_port = "0"
	I1208 00:32:06.020878  826329 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 00:32:06.021158  826329 command_runner.go:130] > # stream_enable_tls = false
	I1208 00:32:06.021176  826329 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 00:32:06.021352  826329 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 00:32:06.021367  826329 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 00:32:06.021380  826329 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021617  826329 command_runner.go:130] > # stream_tls_cert = ""
	I1208 00:32:06.021634  826329 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 00:32:06.021641  826329 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021794  826329 command_runner.go:130] > # stream_tls_key = ""
	I1208 00:32:06.021808  826329 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 00:32:06.021824  826329 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 00:32:06.021840  826329 command_runner.go:130] > # automatically pick up the changes.
	I1208 00:32:06.022038  826329 command_runner.go:130] > # stream_tls_ca = ""
	I1208 00:32:06.022075  826329 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022282  826329 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 00:32:06.022297  826329 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022560  826329 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 00:32:06.022581  826329 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 00:32:06.022589  826329 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 00:32:06.022596  826329 command_runner.go:130] > [crio.runtime]
	I1208 00:32:06.022603  826329 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 00:32:06.022613  826329 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 00:32:06.022618  826329 command_runner.go:130] > # "nofile=1024:2048"
	I1208 00:32:06.022627  826329 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 00:32:06.022736  826329 command_runner.go:130] > # default_ulimits = [
	I1208 00:32:06.022966  826329 command_runner.go:130] > # ]
	I1208 00:32:06.022982  826329 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 00:32:06.023192  826329 command_runner.go:130] > # no_pivot = false
	I1208 00:32:06.023203  826329 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 00:32:06.023210  826329 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 00:32:06.023435  826329 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 00:32:06.023449  826329 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 00:32:06.023455  826329 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 00:32:06.023463  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023655  826329 command_runner.go:130] > # conmon = ""
	I1208 00:32:06.023668  826329 command_runner.go:130] > # Cgroup setting for conmon
	I1208 00:32:06.023697  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 00:32:06.023812  826329 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 00:32:06.023826  826329 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 00:32:06.023831  826329 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 00:32:06.023839  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023982  826329 command_runner.go:130] > # conmon_env = [
	I1208 00:32:06.024123  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024147  826329 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 00:32:06.024153  826329 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 00:32:06.024161  826329 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 00:32:06.024313  826329 command_runner.go:130] > # default_env = [
	I1208 00:32:06.024407  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024424  826329 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 00:32:06.024439  826329 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1208 00:32:06.024689  826329 command_runner.go:130] > # selinux = false
	I1208 00:32:06.024713  826329 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 00:32:06.024722  826329 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1208 00:32:06.024727  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.024963  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.024977  826329 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1208 00:32:06.024983  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025171  826329 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1208 00:32:06.025185  826329 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 00:32:06.025199  826329 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 00:32:06.025214  826329 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 00:32:06.025222  826329 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 00:32:06.025227  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025459  826329 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 00:32:06.025474  826329 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 00:32:06.025479  826329 command_runner.go:130] > # the cgroup blockio controller.
	I1208 00:32:06.025701  826329 command_runner.go:130] > # blockio_config_file = ""
	I1208 00:32:06.025716  826329 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1208 00:32:06.025721  826329 command_runner.go:130] > # blockio parameters.
	I1208 00:32:06.025998  826329 command_runner.go:130] > # blockio_reload = false
	I1208 00:32:06.026018  826329 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 00:32:06.026025  826329 command_runner.go:130] > # irqbalance daemon.
	I1208 00:32:06.026221  826329 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 00:32:06.026241  826329 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1208 00:32:06.026249  826329 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1208 00:32:06.026257  826329 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1208 00:32:06.026494  826329 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1208 00:32:06.026510  826329 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 00:32:06.026517  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.026722  826329 command_runner.go:130] > # rdt_config_file = ""
	I1208 00:32:06.026753  826329 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 00:32:06.026902  826329 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 00:32:06.026919  826329 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 00:32:06.027125  826329 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 00:32:06.027138  826329 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 00:32:06.027163  826329 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 00:32:06.027177  826329 command_runner.go:130] > # will be added.
	I1208 00:32:06.027277  826329 command_runner.go:130] > # default_capabilities = [
	I1208 00:32:06.027581  826329 command_runner.go:130] > # 	"CHOWN",
	I1208 00:32:06.027682  826329 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 00:32:06.027912  826329 command_runner.go:130] > # 	"FSETID",
	I1208 00:32:06.028073  826329 command_runner.go:130] > # 	"FOWNER",
	I1208 00:32:06.028166  826329 command_runner.go:130] > # 	"SETGID",
	I1208 00:32:06.028351  826329 command_runner.go:130] > # 	"SETUID",
	I1208 00:32:06.028526  826329 command_runner.go:130] > # 	"SETPCAP",
	I1208 00:32:06.028680  826329 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 00:32:06.028802  826329 command_runner.go:130] > # 	"KILL",
	I1208 00:32:06.028996  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029019  826329 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 00:32:06.029028  826329 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 00:32:06.029301  826329 command_runner.go:130] > # add_inheritable_capabilities = false
	I1208 00:32:06.029326  826329 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 00:32:06.029333  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029338  826329 command_runner.go:130] > default_sysctls = [
	I1208 00:32:06.029464  826329 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1208 00:32:06.029477  826329 command_runner.go:130] > ]
	I1208 00:32:06.029483  826329 command_runner.go:130] > # List of devices on the host that a
	I1208 00:32:06.029491  826329 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 00:32:06.029495  826329 command_runner.go:130] > # allowed_devices = [
	I1208 00:32:06.029499  826329 command_runner.go:130] > # 	"/dev/fuse",
	I1208 00:32:06.029507  826329 command_runner.go:130] > # 	"/dev/net/tun",
	I1208 00:32:06.029726  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029756  826329 command_runner.go:130] > # List of additional devices. specified as
	I1208 00:32:06.029769  826329 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 00:32:06.029775  826329 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 00:32:06.029782  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029898  826329 command_runner.go:130] > # additional_devices = [
	I1208 00:32:06.029911  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029918  826329 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 00:32:06.029922  826329 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 00:32:06.030014  826329 command_runner.go:130] > # 	"/etc/cdi",
	I1208 00:32:06.030033  826329 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 00:32:06.030037  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030045  826329 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 00:32:06.030051  826329 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 00:32:06.030058  826329 command_runner.go:130] > # Defaults to false.
	I1208 00:32:06.030179  826329 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 00:32:06.030194  826329 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 00:32:06.030201  826329 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 00:32:06.030206  826329 command_runner.go:130] > # hooks_dir = [
	I1208 00:32:06.030462  826329 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 00:32:06.030539  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030554  826329 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 00:32:06.030561  826329 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 00:32:06.030592  826329 command_runner.go:130] > # its default mounts from the following two files:
	I1208 00:32:06.030598  826329 command_runner.go:130] > #
	I1208 00:32:06.030608  826329 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 00:32:06.030631  826329 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 00:32:06.030642  826329 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 00:32:06.030646  826329 command_runner.go:130] > #
	I1208 00:32:06.030658  826329 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 00:32:06.030668  826329 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 00:32:06.030675  826329 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 00:32:06.030680  826329 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 00:32:06.030684  826329 command_runner.go:130] > #
	I1208 00:32:06.030688  826329 command_runner.go:130] > # default_mounts_file = ""
	I1208 00:32:06.030697  826329 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 00:32:06.030710  826329 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 00:32:06.030795  826329 command_runner.go:130] > # pids_limit = -1
	I1208 00:32:06.030811  826329 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1208 00:32:06.030858  826329 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 00:32:06.030867  826329 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 00:32:06.030881  826329 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 00:32:06.030886  826329 command_runner.go:130] > # log_size_max = -1
	I1208 00:32:06.030903  826329 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 00:32:06.031086  826329 command_runner.go:130] > # log_to_journald = false
	I1208 00:32:06.031102  826329 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 00:32:06.031167  826329 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 00:32:06.031181  826329 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 00:32:06.031241  826329 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 00:32:06.031258  826329 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 00:32:06.031327  826329 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 00:32:06.031335  826329 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 00:32:06.031339  826329 command_runner.go:130] > # read_only = false
	I1208 00:32:06.031345  826329 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 00:32:06.031377  826329 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 00:32:06.031383  826329 command_runner.go:130] > # live configuration reload.
	I1208 00:32:06.031388  826329 command_runner.go:130] > # log_level = "info"
	I1208 00:32:06.031397  826329 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 00:32:06.031408  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.031412  826329 command_runner.go:130] > # log_filter = ""
	I1208 00:32:06.031419  826329 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031430  826329 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 00:32:06.031434  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031452  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031456  826329 command_runner.go:130] > # uid_mappings = ""
	I1208 00:32:06.031462  826329 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031468  826329 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 00:32:06.031472  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031482  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031553  826329 command_runner.go:130] > # gid_mappings = ""
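	A minimal illustrative snippet of the containerUID:HostUID:Size form described above; the ID ranges are hypothetical, and the options are deprecated as noted:
	
	  # map container root (UID/GID 0) to host ID 100000, for a range of 65536 IDs
	  uid_mappings = "0:100000:65536"
	  gid_mappings = "0:100000:65536"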
	I1208 00:32:06.031569  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 00:32:06.031632  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031648  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031656  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031742  826329 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 00:32:06.031759  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 00:32:06.031785  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031798  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031807  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.032017  826329 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 00:32:06.032056  826329 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 00:32:06.032071  826329 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 00:32:06.032077  826329 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1208 00:32:06.032099  826329 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 00:32:06.032106  826329 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 00:32:06.032112  826329 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 00:32:06.032205  826329 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 00:32:06.032267  826329 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 00:32:06.032278  826329 command_runner.go:130] > # drop_infra_ctr = true
	I1208 00:32:06.032285  826329 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 00:32:06.032292  826329 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 00:32:06.032307  826329 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 00:32:06.032340  826329 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 00:32:06.032356  826329 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1208 00:32:06.032371  826329 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1208 00:32:06.032378  826329 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1208 00:32:06.032384  826329 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1208 00:32:06.032394  826329 command_runner.go:130] > # shared_cpuset = ""
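	For illustration, both options take the Linux CPU list format mentioned above; the CPU numbers here are hypothetical:
	
	  # pin infra (pause) containers to CPUs 0-1, and allow CPUs 2-3 to be shared
	  infra_ctr_cpuset = "0-1"
	  shared_cpuset = "2-3"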
	I1208 00:32:06.032400  826329 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 00:32:06.032411  826329 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 00:32:06.032448  826329 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 00:32:06.032463  826329 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 00:32:06.032467  826329 command_runner.go:130] > # pinns_path = ""
	I1208 00:32:06.032473  826329 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1208 00:32:06.032479  826329 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1208 00:32:06.032487  826329 command_runner.go:130] > # enable_criu_support = true
	I1208 00:32:06.032493  826329 command_runner.go:130] > # Enable/disable the generation of the container,
	I1208 00:32:06.032500  826329 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1208 00:32:06.032732  826329 command_runner.go:130] > # enable_pod_events = false
	I1208 00:32:06.032748  826329 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 00:32:06.032827  826329 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1208 00:32:06.032846  826329 command_runner.go:130] > # default_runtime = "crun"
	I1208 00:32:06.032871  826329 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 00:32:06.032889  826329 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1208 00:32:06.032901  826329 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 00:32:06.032911  826329 command_runner.go:130] > # creation as a file is not desired either.
	I1208 00:32:06.032919  826329 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 00:32:06.032929  826329 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 00:32:06.032938  826329 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 00:32:06.032974  826329 command_runner.go:130] > # ]
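	A minimal sketch of this option, using the /etc/hostname case mentioned above as the rejected mount source:
	
	  absent_mount_sources_to_reject = [
	  	"/etc/hostname",
	  ]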
	I1208 00:32:06.033041  826329 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 00:32:06.033057  826329 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 00:32:06.033064  826329 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1208 00:32:06.033070  826329 command_runner.go:130] > # Each entry in the table should follow the format:
	I1208 00:32:06.033073  826329 command_runner.go:130] > #
	I1208 00:32:06.033106  826329 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1208 00:32:06.033112  826329 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1208 00:32:06.033117  826329 command_runner.go:130] > # runtime_type = "oci"
	I1208 00:32:06.033192  826329 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1208 00:32:06.033209  826329 command_runner.go:130] > # inherit_default_runtime = false
	I1208 00:32:06.033214  826329 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1208 00:32:06.033219  826329 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1208 00:32:06.033225  826329 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1208 00:32:06.033228  826329 command_runner.go:130] > # monitor_env = []
	I1208 00:32:06.033233  826329 command_runner.go:130] > # privileged_without_host_devices = false
	I1208 00:32:06.033237  826329 command_runner.go:130] > # allowed_annotations = []
	I1208 00:32:06.033263  826329 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1208 00:32:06.033276  826329 command_runner.go:130] > # no_sync_log = false
	I1208 00:32:06.033282  826329 command_runner.go:130] > # default_annotations = {}
	I1208 00:32:06.033376  826329 command_runner.go:130] > # stream_websockets = false
	I1208 00:32:06.033384  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.033433  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.033444  826329 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1208 00:32:06.033456  826329 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1208 00:32:06.033467  826329 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 00:32:06.033474  826329 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 00:32:06.033477  826329 command_runner.go:130] > #   in $PATH.
	I1208 00:32:06.033483  826329 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1208 00:32:06.033489  826329 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 00:32:06.033495  826329 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1208 00:32:06.033504  826329 command_runner.go:130] > #   state.
	I1208 00:32:06.033518  826329 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 00:32:06.033528  826329 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 00:32:06.033535  826329 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1208 00:32:06.033547  826329 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1208 00:32:06.033552  826329 command_runner.go:130] > #   the values from the default runtime on load time.
	I1208 00:32:06.033558  826329 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 00:32:06.033563  826329 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 00:32:06.033604  826329 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 00:32:06.033610  826329 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 00:32:06.033615  826329 command_runner.go:130] > #   The currently recognized values are:
	I1208 00:32:06.033697  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 00:32:06.033736  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 00:32:06.033745  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 00:32:06.033760  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 00:32:06.033770  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 00:32:06.033787  826329 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 00:32:06.033799  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1208 00:32:06.033811  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1208 00:32:06.033818  826329 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 00:32:06.033824  826329 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1208 00:32:06.033832  826329 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1208 00:32:06.033842  826329 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1208 00:32:06.033851  826329 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1208 00:32:06.033863  826329 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1208 00:32:06.033869  826329 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1208 00:32:06.033883  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1208 00:32:06.033892  826329 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1208 00:32:06.033896  826329 command_runner.go:130] > #   deprecated option "conmon".
	I1208 00:32:06.033903  826329 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1208 00:32:06.033908  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1208 00:32:06.033916  826329 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1208 00:32:06.033925  826329 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 00:32:06.033933  826329 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1208 00:32:06.033944  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1208 00:32:06.033955  826329 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1208 00:32:06.033959  826329 command_runner.go:130] > #   conmon-rs by using:
	I1208 00:32:06.033976  826329 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1208 00:32:06.033990  826329 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1208 00:32:06.033998  826329 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1208 00:32:06.034005  826329 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1208 00:32:06.034012  826329 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1208 00:32:06.034036  826329 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1208 00:32:06.034044  826329 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1208 00:32:06.034064  826329 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1208 00:32:06.034074  826329 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1208 00:32:06.034087  826329 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1208 00:32:06.034557  826329 command_runner.go:130] > #   when a machine crash happens.
	I1208 00:32:06.034567  826329 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1208 00:32:06.034582  826329 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1208 00:32:06.034589  826329 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1208 00:32:06.034594  826329 command_runner.go:130] > #   seccomp profile for the runtime.
	I1208 00:32:06.034680  826329 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1208 00:32:06.034713  826329 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
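	Putting the fields above together, a hypothetical handler entry could look like the sketch below; the handler name and paths are assumptions, not values from this run:
	
	  [crio.runtime.runtimes.runc-debug]
	  runtime_path = "/usr/local/bin/runc"
	  runtime_type = "oci"
	  runtime_root = "/run/runc-debug"
	  monitor_path = "/usr/libexec/crio/conmon"
	  monitor_cgroup = "pod"
	  monitor_env = [
	  	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin",
	  ]
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.Devices",
	  ]
	  platform_runtime_paths = { "linux/arm64" = "/usr/local/bin/runc" }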
	I1208 00:32:06.034720  826329 command_runner.go:130] > #
	I1208 00:32:06.034732  826329 command_runner.go:130] > # Using the seccomp notifier feature:
	I1208 00:32:06.034735  826329 command_runner.go:130] > #
	I1208 00:32:06.034742  826329 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1208 00:32:06.034749  826329 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1208 00:32:06.034762  826329 command_runner.go:130] > #
	I1208 00:32:06.034769  826329 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1208 00:32:06.034785  826329 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1208 00:32:06.034788  826329 command_runner.go:130] > #
	I1208 00:32:06.034795  826329 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1208 00:32:06.034799  826329 command_runner.go:130] > # feature.
	I1208 00:32:06.034802  826329 command_runner.go:130] > #
	I1208 00:32:06.034808  826329 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1208 00:32:06.034819  826329 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1208 00:32:06.034825  826329 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1208 00:32:06.034837  826329 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1208 00:32:06.034858  826329 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1208 00:32:06.034861  826329 command_runner.go:130] > #
	I1208 00:32:06.034867  826329 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1208 00:32:06.034878  826329 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1208 00:32:06.034881  826329 command_runner.go:130] > #
	I1208 00:32:06.034887  826329 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1208 00:32:06.034897  826329 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1208 00:32:06.034900  826329 command_runner.go:130] > #
	I1208 00:32:06.034906  826329 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1208 00:32:06.034916  826329 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1208 00:32:06.034920  826329 command_runner.go:130] > # limitation.
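	A sketch of allowing the notifier annotation on a handler: it only needs to appear in that handler's allowed_annotations, and the pod then sets io.kubernetes.cri-o.seccompNotifierAction=stop with restartPolicy Never, as described above:
	
	  [crio.runtime.runtimes.runc]
	  runtime_path = "/usr/libexec/crio/runc"
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]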
	I1208 00:32:06.034927  826329 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1208 00:32:06.034932  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1208 00:32:06.034939  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.034944  826329 command_runner.go:130] > runtime_root = "/run/crun"
	I1208 00:32:06.034954  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.034958  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.034962  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.034972  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.034976  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.034981  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.034990  826329 command_runner.go:130] > allowed_annotations = [
	I1208 00:32:06.034999  826329 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1208 00:32:06.035002  826329 command_runner.go:130] > ]
	I1208 00:32:06.035007  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035011  826329 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 00:32:06.035016  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1208 00:32:06.035020  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.035024  826329 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 00:32:06.035034  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.035038  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.035042  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.035046  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.035050  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.035054  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.035145  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035184  826329 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 00:32:06.035191  826329 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 00:32:06.035197  826329 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 00:32:06.035205  826329 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1208 00:32:06.035222  826329 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1208 00:32:06.035233  826329 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1208 00:32:06.035249  826329 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1208 00:32:06.035255  826329 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 00:32:06.035265  826329 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 00:32:06.035274  826329 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 00:32:06.035280  826329 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 00:32:06.035291  826329 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 00:32:06.035294  826329 command_runner.go:130] > # Example:
	I1208 00:32:06.035299  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 00:32:06.035309  826329 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 00:32:06.035318  826329 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 00:32:06.035324  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 00:32:06.035413  826329 command_runner.go:130] > # cpuset = "0-1"
	I1208 00:32:06.035447  826329 command_runner.go:130] > # cpushares = "5"
	I1208 00:32:06.035460  826329 command_runner.go:130] > # cpuquota = "1000"
	I1208 00:32:06.035471  826329 command_runner.go:130] > # cpuperiod = "100000"
	I1208 00:32:06.035475  826329 command_runner.go:130] > # cpulimit = "35"
	I1208 00:32:06.035479  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.035483  826329 command_runner.go:130] > # The workload name is workload-type.
	I1208 00:32:06.035497  826329 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 00:32:06.035502  826329 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 00:32:06.035540  826329 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 00:32:06.035556  826329 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 00:32:06.035563  826329 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 00:32:06.035576  826329 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1208 00:32:06.035584  826329 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1208 00:32:06.035592  826329 command_runner.go:130] > # Default value is set to true
	I1208 00:32:06.035597  826329 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1208 00:32:06.035603  826329 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1208 00:32:06.035607  826329 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1208 00:32:06.035703  826329 command_runner.go:130] > # Default value is set to 'false'
	I1208 00:32:06.035729  826329 command_runner.go:130] > # disable_hostport_mapping = false
	I1208 00:32:06.035736  826329 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1208 00:32:06.035751  826329 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1208 00:32:06.035755  826329 command_runner.go:130] > # timezone = ""
	I1208 00:32:06.035762  826329 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 00:32:06.035769  826329 command_runner.go:130] > #
	I1208 00:32:06.035775  826329 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 00:32:06.035782  826329 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1208 00:32:06.035785  826329 command_runner.go:130] > [crio.image]
	I1208 00:32:06.035791  826329 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 00:32:06.035796  826329 command_runner.go:130] > # default_transport = "docker://"
	I1208 00:32:06.035802  826329 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 00:32:06.035813  826329 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035818  826329 command_runner.go:130] > # global_auth_file = ""
	I1208 00:32:06.035823  826329 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 00:32:06.035833  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035852  826329 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1208 00:32:06.035863  826329 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 00:32:06.035874  826329 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035950  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035964  826329 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 00:32:06.035972  826329 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 00:32:06.035989  826329 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1208 00:32:06.035998  826329 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1208 00:32:06.036009  826329 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 00:32:06.036013  826329 command_runner.go:130] > # pause_command = "/pause"
	I1208 00:32:06.036019  826329 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1208 00:32:06.036030  826329 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1208 00:32:06.036036  826329 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1208 00:32:06.036043  826329 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1208 00:32:06.036052  826329 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1208 00:32:06.036058  826329 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1208 00:32:06.036062  826329 command_runner.go:130] > # pinned_images = [
	I1208 00:32:06.036065  826329 command_runner.go:130] > # ]
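	An illustrative pinned_images list covering the three pattern kinds described above; the names are hypothetical, apart from the pause image taken from this config:
	
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1",  # exact match
	  	"quay.io/example/*",             # glob: wildcard at the end
	  	"*node-agent*",                  # keyword: wildcards on both ends
	  ]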
	I1208 00:32:06.036071  826329 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 00:32:06.036077  826329 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 00:32:06.036087  826329 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 00:32:06.036093  826329 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 00:32:06.036104  826329 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 00:32:06.036109  826329 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1208 00:32:06.036115  826329 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1208 00:32:06.036126  826329 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1208 00:32:06.036133  826329 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1208 00:32:06.036139  826329 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1208 00:32:06.036145  826329 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1208 00:32:06.036150  826329 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1208 00:32:06.036160  826329 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 00:32:06.036167  826329 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 00:32:06.036172  826329 command_runner.go:130] > # changing them here.
	I1208 00:32:06.036184  826329 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1208 00:32:06.036193  826329 command_runner.go:130] > # insecure_registries = [
	I1208 00:32:06.036196  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036300  826329 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 00:32:06.036317  826329 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1208 00:32:06.036326  826329 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 00:32:06.036331  826329 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 00:32:06.036335  826329 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 00:32:06.036342  826329 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1208 00:32:06.036353  826329 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1208 00:32:06.036358  826329 command_runner.go:130] > # auto_reload_registries = false
	I1208 00:32:06.036365  826329 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1208 00:32:06.036377  826329 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1208 00:32:06.036388  826329 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1208 00:32:06.036393  826329 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1208 00:32:06.036398  826329 command_runner.go:130] > # The mode of short name resolution.
	I1208 00:32:06.036404  826329 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1208 00:32:06.036418  826329 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1208 00:32:06.036424  826329 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1208 00:32:06.036433  826329 command_runner.go:130] > # short_name_mode = "enforcing"
	I1208 00:32:06.036439  826329 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1208 00:32:06.036446  826329 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1208 00:32:06.036457  826329 command_runner.go:130] > # oci_artifact_mount_support = true
	I1208 00:32:06.036463  826329 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 00:32:06.036466  826329 command_runner.go:130] > # CNI plugins.
	I1208 00:32:06.036469  826329 command_runner.go:130] > [crio.network]
	I1208 00:32:06.036476  826329 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 00:32:06.036481  826329 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1208 00:32:06.036485  826329 command_runner.go:130] > # cni_default_network = ""
	I1208 00:32:06.036496  826329 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 00:32:06.036501  826329 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 00:32:06.036506  826329 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 00:32:06.036515  826329 command_runner.go:130] > # plugin_dirs = [
	I1208 00:32:06.036642  826329 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 00:32:06.036668  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036675  826329 command_runner.go:130] > # List of included pod metrics.
	I1208 00:32:06.036679  826329 command_runner.go:130] > # included_pod_metrics = [
	I1208 00:32:06.036860  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036921  826329 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 00:32:06.036927  826329 command_runner.go:130] > [crio.metrics]
	I1208 00:32:06.036932  826329 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 00:32:06.036937  826329 command_runner.go:130] > # enable_metrics = false
	I1208 00:32:06.036942  826329 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 00:32:06.036953  826329 command_runner.go:130] > # Per default all metrics are enabled.
	I1208 00:32:06.036960  826329 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1208 00:32:06.036994  826329 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 00:32:06.037043  826329 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 00:32:06.037079  826329 command_runner.go:130] > # metrics_collectors = [
	I1208 00:32:06.037090  826329 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 00:32:06.037155  826329 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1208 00:32:06.037178  826329 command_runner.go:130] > # 	"containers_oom_total",
	I1208 00:32:06.037336  826329 command_runner.go:130] > # 	"processes_defunct",
	I1208 00:32:06.037413  826329 command_runner.go:130] > # 	"operations_total",
	I1208 00:32:06.037662  826329 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 00:32:06.037734  826329 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 00:32:06.037748  826329 command_runner.go:130] > # 	"operations_errors_total",
	I1208 00:32:06.037753  826329 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 00:32:06.037772  826329 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 00:32:06.037792  826329 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 00:32:06.037922  826329 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 00:32:06.037987  826329 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 00:32:06.038011  826329 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 00:32:06.038021  826329 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1208 00:32:06.038045  826329 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1208 00:32:06.038193  826329 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1208 00:32:06.038255  826329 command_runner.go:130] > # ]
	I1208 00:32:06.038268  826329 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1208 00:32:06.038283  826329 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1208 00:32:06.038321  826329 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 00:32:06.038335  826329 command_runner.go:130] > # metrics_port = 9090
	I1208 00:32:06.038341  826329 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 00:32:06.038408  826329 command_runner.go:130] > # metrics_socket = ""
	I1208 00:32:06.038423  826329 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 00:32:06.038430  826329 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 00:32:06.038449  826329 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 00:32:06.038461  826329 command_runner.go:130] > # certificate on any modification event.
	I1208 00:32:06.038588  826329 command_runner.go:130] > # metrics_cert = ""
	I1208 00:32:06.038614  826329 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 00:32:06.038622  826329 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 00:32:06.038740  826329 command_runner.go:130] > # metrics_key = ""
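	A minimal sketch that enables the metrics endpoint with an explicit collector subset; the values are illustrative, not taken from this run:
	
	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	  	"operations_total",
	  	"image_pulls_failure_total",
	  ]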
	I1208 00:32:06.038809  826329 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 00:32:06.038823  826329 command_runner.go:130] > [crio.tracing]
	I1208 00:32:06.038829  826329 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 00:32:06.038833  826329 command_runner.go:130] > # enable_tracing = false
	I1208 00:32:06.038876  826329 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1208 00:32:06.038890  826329 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1208 00:32:06.038899  826329 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1208 00:32:06.038973  826329 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1208 00:32:06.038987  826329 command_runner.go:130] > # CRI-O NRI configuration.
	I1208 00:32:06.038992  826329 command_runner.go:130] > [crio.nri]
	I1208 00:32:06.039013  826329 command_runner.go:130] > # Globally enable or disable NRI.
	I1208 00:32:06.039024  826329 command_runner.go:130] > # enable_nri = true
	I1208 00:32:06.039029  826329 command_runner.go:130] > # NRI socket to listen on.
	I1208 00:32:06.039033  826329 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1208 00:32:06.039044  826329 command_runner.go:130] > # NRI plugin directory to use.
	I1208 00:32:06.039198  826329 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1208 00:32:06.039225  826329 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1208 00:32:06.039233  826329 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1208 00:32:06.039239  826329 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1208 00:32:06.039363  826329 command_runner.go:130] > # nri_disable_connections = false
	I1208 00:32:06.039381  826329 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1208 00:32:06.039476  826329 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1208 00:32:06.039494  826329 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1208 00:32:06.039499  826329 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1208 00:32:06.039504  826329 command_runner.go:130] > # NRI default validator configuration.
	I1208 00:32:06.039511  826329 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1208 00:32:06.039518  826329 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1208 00:32:06.039557  826329 command_runner.go:130] > # can be restricted/rejected:
	I1208 00:32:06.039568  826329 command_runner.go:130] > # - OCI hook injection
	I1208 00:32:06.039573  826329 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1208 00:32:06.039586  826329 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1208 00:32:06.039595  826329 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1208 00:32:06.039600  826329 command_runner.go:130] > # - adjustment of linux namespaces
	I1208 00:32:06.039606  826329 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1208 00:32:06.039685  826329 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1208 00:32:06.039812  826329 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1208 00:32:06.039825  826329 command_runner.go:130] > #
	I1208 00:32:06.039830  826329 command_runner.go:130] > # [crio.nri.default_validator]
	I1208 00:32:06.039911  826329 command_runner.go:130] > # nri_enable_default_validator = false
	I1208 00:32:06.039939  826329 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1208 00:32:06.039947  826329 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1208 00:32:06.039959  826329 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1208 00:32:06.039966  826329 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1208 00:32:06.039971  826329 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1208 00:32:06.039975  826329 command_runner.go:130] > # nri_validator_required_plugins = [
	I1208 00:32:06.039978  826329 command_runner.go:130] > # ]
	I1208 00:32:06.039984  826329 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
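	A sketch of enabling the builtin default validator with one rejected adjustment and one required plugin; the plugin name is hypothetical:
	
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"example-policy-plugin",
	  ]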
	I1208 00:32:06.039994  826329 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 00:32:06.040003  826329 command_runner.go:130] > [crio.stats]
	I1208 00:32:06.040013  826329 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 00:32:06.040019  826329 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 00:32:06.040027  826329 command_runner.go:130] > # stats_collection_period = 0
	I1208 00:32:06.040033  826329 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1208 00:32:06.040043  826329 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1208 00:32:06.040047  826329 command_runner.go:130] > # collection_period = 0
	I1208 00:32:06.041802  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994368044Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1208 00:32:06.041819  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994407331Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1208 00:32:06.041829  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994434752Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1208 00:32:06.041836  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994457826Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1208 00:32:06.041847  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994536038Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:06.041867  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994955873Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1208 00:32:06.041895  826329 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 00:32:06.042057  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:06.042089  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:06.042117  826329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:32:06.042147  826329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:32:06.042284  826329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:32:06.042367  826329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:32:06.049993  826329 command_runner.go:130] > kubeadm
	I1208 00:32:06.050024  826329 command_runner.go:130] > kubectl
	I1208 00:32:06.050029  826329 command_runner.go:130] > kubelet
	I1208 00:32:06.051018  826329 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:32:06.051091  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:32:06.059413  826329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:32:06.073688  826329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:32:06.087599  826329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 00:32:06.100920  826329 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:32:06.104607  826329 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:32:06.104862  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:06.223310  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:06.506702  826329 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:32:06.506774  826329 certs.go:195] generating shared ca certs ...
	I1208 00:32:06.506805  826329 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:06.507033  826329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:32:06.507124  826329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:32:06.507152  826329 certs.go:257] generating profile certs ...
	I1208 00:32:06.507310  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:32:06.507422  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:32:06.507510  826329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:32:06.507537  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:32:06.507566  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:32:06.507605  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:32:06.507636  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:32:06.507680  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:32:06.507713  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:32:06.507755  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:32:06.507788  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:32:06.507873  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:32:06.507940  826329 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:32:06.507964  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:32:06.508024  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:32:06.508086  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:32:06.508156  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:32:06.508255  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:06.508336  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.508374  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.508417  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.509152  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:32:06.534629  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:32:06.554458  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:32:06.573968  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:32:06.590997  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:32:06.608508  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:32:06.625424  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:32:06.642336  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:32:06.660002  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:32:06.677652  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:32:06.695647  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:32:06.713354  826329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
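
The block above stages every certificate and the kubeconfig as a file asset and copies each one to its in-node path. A minimal Go sketch of one such copy over plain scp, with placeholder host, port and key values (minikube actually drives these copies through its internal ssh_runner; pushAsset is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// pushAsset copies one local asset to its target path on the node.
// A minimal sketch: the key path, port and user@host below are placeholders,
// not values taken from this run.
func pushAsset(localPath, remotePath string) error {
	cmd := exec.Command("scp",
		"-i", "/path/to/machines/functional-525396/id_rsa", // placeholder key path
		"-P", "33508", // placeholder SSH port
		localPath, "docker@127.0.0.1:"+remotePath)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := pushAsset(
		"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt",
		"/var/lib/minikube/certs/ca.crt")
	fmt.Println(err)
}
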
	I1208 00:32:06.725836  826329 ssh_runner.go:195] Run: openssl version
	I1208 00:32:06.731951  826329 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:32:06.732096  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.739312  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:32:06.746650  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750259  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750312  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750360  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.790520  826329 command_runner.go:130] > 51391683
	I1208 00:32:06.791045  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:32:06.798345  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.805645  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:32:06.813042  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816781  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816807  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816859  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.857524  826329 command_runner.go:130] > 3ec20f2e
	I1208 00:32:06.857994  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:32:06.865262  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.872409  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:32:06.879529  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883021  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883115  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883198  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.923843  826329 command_runner.go:130] > b5213941
	I1208 00:32:06.924322  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
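
The hashing steps above are how the node's trust store gets wired up: each CA file is hashed with openssl and exposed as /etc/ssl/certs/<hash>.0. A minimal Go sketch of the same hash-and-symlink step, run locally rather than over SSH as minikube does (linkCAByHash is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCAByHash computes the OpenSSL subject hash of a CA file and exposes it
// as /etc/ssl/certs/<hash>.0 so the system trust store can find it.
func linkCAByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in this log
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mirror `ln -fs`: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
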
	I1208 00:32:06.931656  826329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935287  826329 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935325  826329 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:32:06.935332  826329 command_runner.go:130] > Device: 259,1	Inode: 1322385     Links: 1
	I1208 00:32:06.935354  826329 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:06.935369  826329 command_runner.go:130] > Access: 2025-12-08 00:27:59.408752113 +0000
	I1208 00:32:06.935374  826329 command_runner.go:130] > Modify: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935396  826329 command_runner.go:130] > Change: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935407  826329 command_runner.go:130] >  Birth: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935530  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:32:06.975831  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:06.976261  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:32:07.017790  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.017978  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:32:07.058488  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.058966  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:32:07.099457  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.099917  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:32:07.141471  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.141903  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:32:07.182188  826329 command_runner.go:130] > Certificate will not expire
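
The -checkend 86400 probes above report "Certificate will not expire" when a control-plane certificate is still valid for at least another day. A minimal Go sketch of that check, interpreting openssl's exit code the same way (certExpiresWithinADay is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
)

// certExpiresWithinADay runs `openssl x509 -noout -checkend 86400`:
// exit code 0 means the certificate is still valid for the next 86400 seconds,
// exit code 1 means it will expire within that window.
func certExpiresWithinADay(crtPath string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", crtPath, "-checkend", "86400").Run()
	if err == nil {
		return false, nil // "Certificate will not expire"
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // certificate expires within 24h
	}
	return false, err // openssl itself failed
}

func main() {
	expiring, err := certExpiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(expiring, err)
}
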
	I1208 00:32:07.182659  826329 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:07.182760  826329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:32:07.182825  826329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:32:07.209144  826329 cri.go:89] found id: ""
	I1208 00:32:07.209214  826329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:32:07.216134  826329 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:32:07.216154  826329 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:32:07.216162  826329 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:32:07.217097  826329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:32:07.217114  826329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:32:07.217178  826329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:32:07.224428  826329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:32:07.224856  826329 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.224961  826329 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "functional-525396" cluster setting kubeconfig missing "functional-525396" context setting]
	I1208 00:32:07.225241  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.225667  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.225818  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
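
The client config dumped above is just an apiserver endpoint plus the profile's client certificate, key and cluster CA. A minimal client-go sketch that builds an equivalent client from those files, under the assumption that a plain rest.Config is sufficient for this purpose (buildClient is a hypothetical helper):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// buildClient assembles a clientset from the same endpoint and cert paths
// that appear in the log's kapi.go dump.
func buildClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	_, err := buildClient()
	fmt.Println(err)
}
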
	I1208 00:32:07.226341  826329 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:32:07.226363  826329 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:32:07.226369  826329 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:32:07.226375  826329 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:32:07.226381  826329 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:32:07.226674  826329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:32:07.226772  826329 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:32:07.234310  826329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:32:07.234378  826329 kubeadm.go:602] duration metric: took 17.25872ms to restartPrimaryControlPlane
	I1208 00:32:07.234395  826329 kubeadm.go:403] duration metric: took 51.743543ms to StartCluster
	I1208 00:32:07.234412  826329 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.234484  826329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.235129  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.235358  826329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:32:07.235583  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:07.235658  826329 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:32:07.235740  826329 addons.go:70] Setting storage-provisioner=true in profile "functional-525396"
	I1208 00:32:07.235754  826329 addons.go:239] Setting addon storage-provisioner=true in "functional-525396"
	I1208 00:32:07.235778  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.236237  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.236576  826329 addons.go:70] Setting default-storageclass=true in profile "functional-525396"
	I1208 00:32:07.236601  826329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-525396"
	I1208 00:32:07.236875  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.242309  826329 out.go:179] * Verifying Kubernetes components...
	I1208 00:32:07.245184  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:07.271460  826329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:32:07.274400  826329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.274424  826329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:32:07.274492  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.276071  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.276241  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.276512  826329 addons.go:239] Setting addon default-storageclass=true in "functional-525396"
	I1208 00:32:07.276540  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.276944  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.314823  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.318477  826329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:07.318497  826329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:32:07.318558  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.352646  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.447557  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:07.488721  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.519084  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.257520  826329 node_ready.go:35] waiting up to 6m0s for node "functional-525396" to be "Ready" ...
	I1208 00:32:08.257618  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257654  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257688  826329 retry.go:31] will retry after 154.925821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257654  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.257704  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257722  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257734  826329 retry.go:31] will retry after 240.899479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257750  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.258076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.413579  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.477856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.477934  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.477962  826329 retry.go:31] will retry after 471.79599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.499019  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.559244  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.559341  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.559365  826329 retry.go:31] will retry after 419.613997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
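
Each failed apply above is retried after a growing delay while the apiserver on :8441 is still refusing connections. A minimal Go sketch of that retry-with-backoff pattern; the delays and jitter are illustrative only, not minikube's exact retry.go policy (applyWithRetry is a hypothetical helper):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyWithRetry calls apply until it succeeds or attempts run out,
// doubling a jittered delay between tries, similar to the
// "will retry after ..." lines in the log.
func applyWithRetry(apply func() error, attempts int) error {
	delay := 150 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry(func() error { return fmt.Errorf("connection refused") }, 3)
}
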
	I1208 00:32:08.758693  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.758772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.759084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.950598  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.979140  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.022887  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.022933  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.022979  826329 retry.go:31] will retry after 789.955074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083550  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.083656  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083684  826329 retry.go:31] will retry after 584.522236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.668477  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.723720  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.727856  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.727932  826329 retry.go:31] will retry after 996.136704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.757987  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.758082  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.813684  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:09.865943  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.869391  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.869422  826329 retry.go:31] will retry after 1.082403251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.257910  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:10.258329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
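
The GET requests above are the node-readiness poll: the node object is fetched every 500ms, transient "connection refused" errors are tolerated, and the loop only stops once the Ready condition turns True or the 6m0s deadline passes. A minimal client-go sketch of the same loop, assuming a kubeconfig-based client (waitForNodeReady is a hypothetical helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the named node until its Ready condition is True
// or the timeout elapses; fetch errors are treated as transient, as in the log.
func waitForNodeReady(kubeconfig, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // keep polling through dial errors
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	fmt.Println(waitForNodeReady("/home/jenkins/minikube-integration/22054-789938/kubeconfig", "functional-525396", 6*time.Minute))
}
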
	I1208 00:32:10.724942  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:10.758490  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.758896  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.786956  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:10.787023  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.787045  826329 retry.go:31] will retry after 1.653307887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.952461  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:11.017630  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:11.017682  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.017706  826329 retry.go:31] will retry after 1.450018323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.257721  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.258081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.757826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.757911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.258016  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.258092  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.258398  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:12.258449  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.440941  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:12.468519  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:12.523147  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.523192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.523212  826329 retry.go:31] will retry after 1.808868247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537050  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.537096  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537115  826329 retry.go:31] will retry after 1.005297336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.758616  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.758689  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.758985  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.257733  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.542714  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:13.607721  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:13.607772  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.607793  826329 retry.go:31] will retry after 2.59048957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.758025  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.758103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.257759  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.257837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.332402  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:14.393856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:14.393908  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.393927  826329 retry.go:31] will retry after 3.003957784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.758447  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.758779  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:14.758833  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:15.258432  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.258504  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.258873  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.758697  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.758770  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.198619  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:16.257994  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.258110  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.258333  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.261663  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:16.261706  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.261724  826329 retry.go:31] will retry after 3.921003057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.758355  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.758442  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.758740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.258595  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.258667  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.259014  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:17.259070  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:17.398537  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:17.459046  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:17.459087  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.459108  826329 retry.go:31] will retry after 6.352068949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.758636  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.758713  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.759027  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.757758  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.758113  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.258205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.757895  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:19.758338  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:20.183008  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:20.244376  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.244427  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.244447  826329 retry.go:31] will retry after 4.642616038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.258603  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.258946  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.757858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.758256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.757997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.758369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:22.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.757950  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.758369  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.257963  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.258271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.758124  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.758456  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.758513  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.811708  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:23.877239  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:23.877286  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:23.877305  826329 retry.go:31] will retry after 3.991513365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.257726  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.757814  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.757890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.887652  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:24.946807  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:24.946870  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.946894  826329 retry.go:31] will retry after 6.868435312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:25.258372  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.258452  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.258751  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.758655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.759159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.759287  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:26.257937  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.258011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.258320  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.757849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.758164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.258591  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.758609  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.869339  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:27.929619  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:27.929669  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:27.929689  826329 retry.go:31] will retry after 5.640751927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:28.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.258197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:28.258246  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.757900  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.257906  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.758680  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.758746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.759010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:30.759051  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:31.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.258120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.757934  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.815479  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:31.877679  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:31.877725  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:31.877744  826329 retry.go:31] will retry after 9.288265427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:32.258204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.258274  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.258579  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.758594  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.758959  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.257805  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.258256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:33.258316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:33.570705  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:33.628260  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:33.631756  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.631797  826329 retry.go:31] will retry after 7.380803559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.758003  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.758091  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.257826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.257908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.757933  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.757723  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:35.758156  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:36.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.257953  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.258310  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.758204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.758282  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.758636  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:37.758697  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:38.258444  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.258520  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.258964  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.758657  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.758988  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.258591  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.259009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.757689  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.757764  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.758032  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.257724  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.257806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.258168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:40.258225  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:40.757812  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.757892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.013670  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:41.072281  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.076192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.076223  826329 retry.go:31] will retry after 30.64284814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.166454  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:41.227404  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.227446  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.227466  826329 retry.go:31] will retry after 28.006603896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.258583  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.258655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.758793  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.758886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.759193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.257895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.258236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:42.258293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.758154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.758523  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.258386  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.258459  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.258782  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.758542  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.758614  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.758961  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.258683  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.258759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:44.259091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.757800  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.758206  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.258097  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.259164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.757651  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.757746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.758010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.257735  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.257815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.258117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.757885  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.757969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.758288  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.758347  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:47.258326  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.258400  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.258685  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.758684  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.758763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.759114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.257709  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.757752  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.758123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:49.258218  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:49.757765  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.758188  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:51.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.258204  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:51.258253  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:51.757903  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.757978  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.758301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.757965  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.758392  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:53.758279  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:54.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.257882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:54.757818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.757897  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.258277  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.757925  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:55.758403  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:56.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.258035  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.258362  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:56.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.258678  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.258763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.259088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.757900  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.757974  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:58.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.258215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:58.258269  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:58.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.758311  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.257792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.258100  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.757787  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:00.257846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:00.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:00.758031  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.758108  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.757962  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:02.257983  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.258055  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.258387  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:02.258456  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:02.757985  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.758059  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.258055  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.258125  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.258438  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.757882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:04.257989  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:04.258481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:04.758118  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.758201  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.758485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.258270  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.758448  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.758527  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.758934  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.257684  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.257772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.258049  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:06.758206  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:07.258726  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.258824  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.259215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:07.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.758011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.758271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.257849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:08.758228  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:09.234960  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:09.258398  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.258467  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.258726  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:09.299771  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:09.299811  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.299830  826329 retry.go:31] will retry after 22.917133282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
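(The storage-provisioner apply above fails because kubectl's client-side validation needs the apiserver's /openapi/v2 endpoint, which is refusing connections on port 8441 just like the node poll; minikube's retry helper therefore reschedules the same command after a delay, here 22.9 s. Below is a minimal sketch of that retry-with-backoff shape; applyWithRetry and its backoff constants are hypothetical, not minikube's addons code.)

    // Illustrative retry-with-backoff around a kubectl apply, in the spirit
    // of the retry.go entries above. Hypothetical helper, not minikube code.
    package addonsretry

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
    // waits and retries with a growing delay, much like the
    // "will retry after ..." lines in the log above.
    func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
    	delay := 10 * time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    		out, runErr := cmd.CombinedOutput()
    		if runErr == nil {
    			return nil
    		}
    		err = fmt.Errorf("apply %s failed: %v\noutput:\n%s", manifest, runErr, out)
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the wait between attempts
    	}
    	return err
    }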
	I1208 00:33:09.758561  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.758640  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.758995  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.258770  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.258868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.259197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.757838  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.758190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.257813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:11.258179  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:11.719678  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:11.758124  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.758203  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.758476  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.779600  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:11.783324  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:11.783357  826329 retry.go:31] will retry after 27.574784486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:12.257740  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.258104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:12.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:13.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.258219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:13.258272  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:13.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.757988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.257958  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.258037  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.258315  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:15.258360  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:15.757919  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.757879  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.257963  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.258036  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.258357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:17.258414  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.758272  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.758354  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.758668  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.258406  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.258487  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.258798  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.758471  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.758544  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.758891  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.258691  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.258772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.259134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.259190  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.757739  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.758088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.757870  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.757943  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.758290  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.757993  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.257852  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.258182  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:24.258220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.757878  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.758349  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.258345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.257811  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.258284  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.758040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.258252  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.258330  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.258588  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.758645  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.758735  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.759079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.758067  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.758108  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.757789  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.257875  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.257941  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.258210  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.757889  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.758308  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.257774  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.757714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.757784  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.758087  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.217681  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:32.258110  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.258497  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.272413  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:32.276021  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.276065  826329 retry.go:31] will retry after 31.830018043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.258151  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.258517  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.758451  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.258598  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.259035  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.758635  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.758714  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.759056  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.257714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.258111  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.758267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.257939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.757891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.258214  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.258289  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.258578  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:37.258623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.758354  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.758421  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.758674  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.258403  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.258497  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.258867  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.758486  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.758558  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.758906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.258694  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.258758  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.259030  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.259072  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.358376  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:39.412374  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416050  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416143  826329 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:33:39.758638  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.758720  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.759108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.757846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.757931  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.257809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.757977  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.758050  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.758393  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:42.258098  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.258182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.258488  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.758485  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.758557  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.758915  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.258576  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.258649  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.258992  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.757700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.757773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.758038  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.257757  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.258132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:44.258184  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.757809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.757999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.758336  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.258084  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.258468  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:46.258519  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.758126  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.758195  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.758462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.258480  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.258906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.758307  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.257842  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.758219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.758291  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:49.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.258184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.757922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.757790  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.257971  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.258282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:51.258346  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.757834  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.757908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.758182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.758452  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.258459  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.258900  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:53.258955  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.758700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.758780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.759083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.258123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.758170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:55.758182  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:56.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.757939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.758018  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.758340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.258337  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.258409  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.258677  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.758592  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:57.759063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:58.257674  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.257773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.757693  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.757771  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.758081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.258187  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.758199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.265698  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.265780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.266096  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:00.266143  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:00.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.757872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.258053  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.257892  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.258340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.758185  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.758273  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.758590  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:02.758643  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:03.258621  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.258702  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.757895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.758191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.106865  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:34:04.166273  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166323  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166403  826329 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:34:04.169502  826329 out.go:179] * Enabled addons: 
	I1208 00:34:04.171536  826329 addons.go:530] duration metric: took 1m56.935875389s for enable addons: enabled=[]
	I1208 00:34:04.258604  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.258682  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.259013  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.758662  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.758731  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.759011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:04.759062  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:05.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.758048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.758370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.257730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.258101  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.758131  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.758204  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.758570  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.258500  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.258586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.258950  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:07.259055  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:07.757997  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.758357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.257713  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.257788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.258063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:09.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:10.257921  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.258346  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.757735  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.757804  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.758062  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.757910  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:11.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:12.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.258391  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.757907  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.757979  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.258000  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.258079  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.757976  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.758046  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.758318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:14.258216  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:14.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.758229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.257940  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.258013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.258338  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:16.258395  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:16.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.758127  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.258701  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.258775  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.757896  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.758282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.257973  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.258048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.757762  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:18.758243  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.258352  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.758033  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.758409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.757890  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.757981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.758323  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:20.758384  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:21.257944  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.258010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.758322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.257850  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.257925  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.258270  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.758019  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.758365  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:22.758408  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:23.258071  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.258151  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.258491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.758281  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.758363  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.758707  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.258477  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.258561  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.759183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.759247  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:25.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.258000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.757806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.758120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.258248  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.757971  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.758380  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.258327  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.258401  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.258666  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:27.258716  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.758723  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.758798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.759103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.258027  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.258370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.758085  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.758508  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:29.758566  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:30.258264  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.258340  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.258608  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.758360  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.758437  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.758793  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.258627  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.258701  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.259047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.757815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.758076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.257780  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:32.258235  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:32.758097  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.758176  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.258283  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.258362  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.258621  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.758421  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.758509  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.758874  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.258773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.259148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:34.259210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.757843  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.757921  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.757995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.758360  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.257977  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.258049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.757866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.758233  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:37.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.257964  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.258296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.758129  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.758200  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.758490  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.258191  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.258269  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.758454  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.758534  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.758898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.758959  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:39.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.258627  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.258916  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.758708  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.759139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.257796  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.757783  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.758212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:41.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:41.258249  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:41.757913  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.758308  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.758011  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.758449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:43.258150  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.258227  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.258566  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:43.258632  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:43.758358  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.758430  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.758722  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.258546  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.259073  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.757871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.257935  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.258485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.758673  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.758756  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:45.759202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:46.257864  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.257946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.258291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:46.758013  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.258513  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.258598  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.259004  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.757974  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.758047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:48.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.257839  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.258125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:48.258175  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:48.757743  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.757816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.758138  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.257906  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.758137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:50.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.257875  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:50.258267  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:50.757934  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.758014  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.758361  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.258044  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.258119  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.258431  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.758821  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.758917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.759213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.757986  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.758060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.758375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:52.758428  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:53.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:53.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.758227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.757810  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.757886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:55.257839  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.257917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:55.258313  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:55.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.757796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.757854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.758141  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:57.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:57.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:57.758246  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.758647  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.258478  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.258560  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.258910  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.257905  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.258259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.758063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.758436  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:59.758494  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:00.270583  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.271106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.271544  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:00.758373  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.758448  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.758792  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.258597  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.259052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:02.257942  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.258019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.258319  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:02.258369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:02.758254  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.758335  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.758657  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.258485  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.258576  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.258926  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.757769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.258084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:04.758220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:05.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.257988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.258274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.257890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.258218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.758268  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:07.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.258264  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.258524  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.758503  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.758579  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.758911  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.258711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.258788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.259165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.758114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:09.258314  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:09.757867  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.257728  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.758154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.257828  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.257901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:11.758292  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.758010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.758331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.257734  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.258128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.757740  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.758156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.257879  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.257958  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:14.258372  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:14.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.258226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.757850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:16.758262  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.758126  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.258225  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.757982  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.758084  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:18.758496  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:19.258078  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.258148  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.258462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.758152  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.257773  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.257847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.258174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.757731  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.758079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:21.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:21.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.758255  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.258007  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.258298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.757958  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.758029  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.257782  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.757721  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.757792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:23.758157  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.257916  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.757747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.757838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.257741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.258153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:25.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.257792  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.257867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.258190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.757716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.757791  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.758047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.257747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.257826  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.258159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.757938  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.758339  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:27.758399  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:28.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.257817  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.258135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.758185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.257754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.757884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.758247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.258359  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:30.258416  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:30.758069  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.758447  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.257716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.757859  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.258342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.758262  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.758582  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:32.758623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.258445  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.258519  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.258864  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.758759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.759120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.757780  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.257854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.258302  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:35.757946  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.758342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.258034  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.258106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.758092  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.758170  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.758498  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.258371  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.258441  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.258740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.258804  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:37.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.758737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.759093  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.758009  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.758085  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.758354  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.258253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.758008  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.758083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.758427  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:39.758481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.258151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.757846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.758147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.258244  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.757920  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.757992  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.758263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.257833  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.258385  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.258459  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.758115  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.758189  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.758495  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.258231  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.258593  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.758356  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.758433  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.758767  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.258451  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.258526  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.258817  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.258887  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:44.758589  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.758661  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.758935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.257830  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.757933  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.257995  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.258070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.258330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.757844  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.758227  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.757930  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.758268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.757753  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.258251  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.758020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.758330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.258077  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.258159  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.258484  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.757837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.757936  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.758281  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.758053  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.758433  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.258161  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.258558  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.758318  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.758393  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.758646  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.758686  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.258483  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.258562  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.258917  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.758694  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.758792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.759186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.257832  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.258147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.757780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.758109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.257711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.258202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.257884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.257966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.758093  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.258229  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.258576  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.258619  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:58.758339  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.758413  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.758719  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.258566  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.258656  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.259028  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.757811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.758074  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.258301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.757822  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.757896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:00.758231  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.257745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.258119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.757848  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.758161  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.257756  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.758045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:02.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:03.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.757799  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.758980  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:36:04.257702  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.258057  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.758149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.257856  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:05.757874  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.757952  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.758274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.257951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.258331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.758228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.258156  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.258257  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.258603  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:07.258657  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:07.758639  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.758722  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.759070  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.257829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.757812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.257802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.257878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.758023  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:09.758454  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:10.258096  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.258168  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.757867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.257926  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.258015  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.758043  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.758118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:12.258271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:12.758147  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.758239  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.758564  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.258372  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.258650  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.758403  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.758476  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.758795  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.258438  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.258516  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.258865  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:14.258923  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:14.758558  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.758632  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.257698  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.257781  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.258012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.258318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.757852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.758196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:16.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:17.257965  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.258040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.757949  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.257775  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.257850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.757883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.258195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:19.757899  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.257800  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.258270  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:21.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.258048  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.258121  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.757988  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.758096  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.758420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:23.258320  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:23.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.758051  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.758371  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.258081  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.258509  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.758321  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.758398  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.758744  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.258469  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.258537  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.258876  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:25.258924  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:25.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.758727  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.759090  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.757942  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.758194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.257841  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.257927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.758332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:27.758386  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:28.257969  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.258045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.758107  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.757822  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.758078  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.257824  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.257913  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.258331  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:30.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.757915  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.257869  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.257937  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.257781  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.757940  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:32.758305  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:33.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.258196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:33.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.758193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.257750  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.757815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.757887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.257918  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.257997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.258317  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:35.258379  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:35.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.757819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.758135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.257783  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.258193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.758166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.258659  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.258733  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:37.259083  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:37.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.758024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.758345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.757932  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.758013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.758289  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.757952  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:39.758433  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:40.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.257793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.258042  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:40.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.257744  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:42.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:42.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.758448  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.757926  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.258047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:44.258465  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:44.757755  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.757827  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.257829  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.257930  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.758253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.757828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:46.758229  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:47.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.257985  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.258332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:47.757967  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.758296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.257872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.757878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:48.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:49.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.757898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.758139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:51.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.257880  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.258144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:51.258193  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:51.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.758200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.257870  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.258287  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.758014  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.758414  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:53.258138  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.258234  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.258594  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:53.258654  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:53.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.257895  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.257969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.258267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.758150  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:55.758195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:56.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.258194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:56.757733  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.758064  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.258687  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.258769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.259122  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.757909  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.757984  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:57.758349  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:58.257827  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.257904  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:58.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.758197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.257858  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.257940  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.758280  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:00.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.258083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.258409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:00.258457  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:00.758379  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.758466  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.758803  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.258644  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.258737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.259037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:02.758316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:03.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.258232  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:03.757961  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.758042  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.758415  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.258085  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.258154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.258494  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.758211  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.758302  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.758664  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:04.758720  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:05.258496  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.258572  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.258935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:05.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.757745  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.758009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.258149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.758260  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:07.258197  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.258266  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.258533  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:07.258574  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:07.758487  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.758564  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.758919  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.258731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.258806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.259157  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.757712  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.757783  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.758052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.758285  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:09.758354  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:10.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.257812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.258068  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:10.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.758172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.758165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:12.257867  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:12.258328  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:12.758227  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.758306  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.758623  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.258376  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.258454  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.258723  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.758551  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.758624  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.758979  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.757823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:14.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:15.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:15.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.758236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.257917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.258276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:16.758276  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:17.257980  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.258060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:17.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.758343  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.258231  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.757795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.757884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.758230  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:19.257736  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:19.258185  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:19.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.257828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.757722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.757789  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.758063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:21.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:21.258238  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:21.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.758000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.257738  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.257820  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.758012  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.758097  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.758430  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.257876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.258177  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.757901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:23.758293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:24.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:24.757779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.758189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.258103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:26.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.258263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:26.258318  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:26.757964  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.758030  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.758273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.258297  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.258369  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.258691  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.758719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.758793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.759134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.257821  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:28.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:29.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:29.757719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.757786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.758037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.258173  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.757761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.758153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:31.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.257787  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.258040  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:31.258078  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:31.757746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.757831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.257904  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.758153  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.758406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:33.257779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.258158  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:33.258205  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:33.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.757959  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.257990  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.258252  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.758130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.257853  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.258198  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:35.258259  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:35.757729  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.757808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.758125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.257840  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:37.258028  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.258098  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.258344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:37.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:37.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.758350  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.757892  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:39.758261  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:40.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.257976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.258247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:40.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.758250  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.757732  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.758046  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:42.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.258257  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:42.258317  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.758145  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.758527  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.258368  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.258629  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.758381  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.758456  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:44.258642  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.258728  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.259104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:44.259162  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:44.757666  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.757747  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.758033  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.258118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.258898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.758751  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.759069  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:46.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.258765  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.259139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:46.259195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:46.757764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.758163  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.258575  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.757955  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.758294  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.757898  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:48.758358  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:49.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.258126  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:49.757824  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.757899  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.258201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.757976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:51.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.257834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:51.258245  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:51.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.758176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.257907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.757998  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.758067  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.758400  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.257761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.257831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.258156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.758051  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:53.758091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:54.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:54.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.258107  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.757840  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.758276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:55.758329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:56.257991  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.258063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:56.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.758080  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.257909  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.258228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.757928  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.758314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:57.758371  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:58.257725  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:58.757817  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.758235  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.257927  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.258328  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.757914  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:00.257912  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.258367  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:00.258421  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:00.758080  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.758156  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.758491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.258328  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.258416  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.258737  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.758951  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.257691  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.257768  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.258118  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:02.758341  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:03.258024  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.258103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.258449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:03.758162  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.758236  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.758778  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.258999  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.757698  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:05.257820  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:05.258295  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:05.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.257819  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.757775  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:07.262532  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.262623  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.263011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:07.263063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:07.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:08.257967  826329 node_ready.go:38] duration metric: took 6m0.00040399s for node "functional-525396" to be "Ready" ...
	I1208 00:38:08.261085  826329 out.go:203] 
	W1208 00:38:08.263874  826329 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:38:08.263896  826329 out.go:285] * 
	W1208 00:38:08.266040  826329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:38:08.269117  826329 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714298717Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714306414Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714311813Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714317286Z" level=info msg="RDT not available in the host system"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.714334664Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715207312Z" level=info msg="Conmon does support the --sync option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715239272Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715254541Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.715984501Z" level=info msg="Conmon does support the --sync option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716004177Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716135961Z" level=info msg="Updated default CNI network name to "
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.716848903Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.717256324Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.717330195Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759894658Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759928759Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.759969326Z" level=info msg="Create NRI interface"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760371471Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760401198Z" level=info msg="runtime interface created"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760416583Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760453055Z" level=info msg="runtime interface starting up..."
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.7604662Z" level=info msg="starting plugins..."
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760483997Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:32:05 functional-525396 crio[5366]: time="2025-12-08T00:32:05.760558443Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:32:05 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:12.743057    8757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:12.743880    8757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:12.745517    8757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:12.745830    8757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:12.747354    8757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 23:24] overlayfs: idmapped layers are currently not supported
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:38:12 up  5:20,  0 user,  load average: 0.19, 0.24, 0.67
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:38:10 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 08 00:38:11 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:11 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:11 functional-525396 kubelet[8633]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:11 functional-525396 kubelet[8633]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:11 functional-525396 kubelet[8633]: E1208 00:38:11.071438    8633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 08 00:38:11 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:11 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:11 functional-525396 kubelet[8667]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:11 functional-525396 kubelet[8667]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:11 functional-525396 kubelet[8667]: E1208 00:38:11.758729    8667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:11 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:12 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 08 00:38:12 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:12 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:12 functional-525396 kubelet[8712]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:12 functional-525396 kubelet[8712]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:12 functional-525396 kubelet[8712]: E1208 00:38:12.570831    8712 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:12 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:12 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
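The kubelet log above is the immediate cause of the connection-refused errors in this profile: kubelet v1.35.0-beta.0 validates the host cgroup mode at startup, refuses to run on cgroup v1 ("cgroup v1 support is unsupported"), and exits, so systemd keeps restarting it (restart counter already past 1140) and the apiserver on port 8441 never comes up. A minimal way to confirm which cgroup mode the node container exposes, assuming a shell on the CI host (stat -fc %T reports cgroup2fs for cgroup v2 and tmpfs for cgroup v1):

	# run against the kic node container used by this profile
	docker exec functional-525396 stat -fc %T /sys/fs/cgroup/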
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (381.760749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.43s)

                                                
                                    

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 kubectl -- --context functional-525396 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 kubectl -- --context functional-525396 get pods: exit status 1 (127.565926ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-525396 kubectl -- --context functional-525396 get pods": exit status 1
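The kubectl wrapper fails for the same underlying reason: nothing is listening on the apiserver port while kubelet crash-loops. A quick reachability check from the CI host, sketched on the assumption that curl is available there (the published host port for 8441/tcp is 33511, per the docker inspect output below):

	# expect "connection refused" until kubelet stays up and the apiserver starts
	curl -sk https://127.0.0.1:33511/healthz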
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
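The inspect output confirms the container itself is fine: State.Status is "running" and 8441/tcp is published to 127.0.0.1:33511, which is why status reports the host as Running below even though the apiserver check earlier returned Stopped. The mapped port can be pulled directly with the same Go template the minikube logs use for 22/tcp; a sketch:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-525396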
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (318.496579ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 logs -n 25: (1.091537057s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-714395 image ls --format short --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format yaml --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format json --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format table --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh     │ functional-714395 ssh pgrep buildkitd                                                                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image   │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                            │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls                                                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete  │ -p functional-714395                                                                                                                              │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start   │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start   │ -p functional-525396 --alsologtostderr -v=8                                                                                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:latest                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add minikube-local-cache-test:functional-525396                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache delete minikube-local-cache-test:functional-525396                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl images                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ cache   │ functional-525396 cache reload                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ kubectl │ functional-525396 kubectl -- --context functional-525396 get pods                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:32:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:32:02.748489  826329 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:32:02.748673  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748687  826329 out.go:374] Setting ErrFile to fd 2...
	I1208 00:32:02.748692  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748975  826329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:32:02.749379  826329 out.go:368] Setting JSON to false
	I1208 00:32:02.750240  826329 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18855,"bootTime":1765135068,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:32:02.750321  826329 start.go:143] virtualization:  
	I1208 00:32:02.755521  826329 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:32:02.759227  826329 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:32:02.759498  826329 notify.go:221] Checking for updates...
	I1208 00:32:02.765171  826329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:32:02.768668  826329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:02.771686  826329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:32:02.774728  826329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:32:02.777727  826329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:32:02.781794  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:02.781971  826329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:32:02.823053  826329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:32:02.823186  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.879429  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.869702269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.879546  826329 docker.go:319] overlay module found
	I1208 00:32:02.884410  826329 out.go:179] * Using the docker driver based on existing profile
	I1208 00:32:02.887311  826329 start.go:309] selected driver: docker
	I1208 00:32:02.887330  826329 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.887447  826329 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:32:02.887565  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.942385  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.932846048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.942810  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:02.942902  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:02.942960  826329 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.948301  826329 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:32:02.951106  826329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:32:02.954049  826329 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:32:02.956917  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:02.956968  826329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:32:02.956999  826329 cache.go:65] Caching tarball of preloaded images
	I1208 00:32:02.957004  826329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:32:02.957092  826329 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:32:02.957103  826329 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:32:02.957210  826329 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:32:02.976499  826329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:32:02.976524  826329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:32:02.976543  826329 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:32:02.976579  826329 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:32:02.976652  826329 start.go:364] duration metric: took 48.116µs to acquireMachinesLock for "functional-525396"
	I1208 00:32:02.976674  826329 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:32:02.976683  826329 fix.go:54] fixHost starting: 
	I1208 00:32:02.976940  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:02.996203  826329 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:32:02.996234  826329 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:32:02.999434  826329 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:32:02.999477  826329 machine.go:94] provisionDockerMachine start ...
	I1208 00:32:02.999559  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.021375  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.021746  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.021762  826329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:32:03.174523  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.174550  826329 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:32:03.174616  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.192743  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.193067  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.193084  826329 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:32:03.356577  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.356704  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.375055  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.375394  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.375419  826329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:32:03.529767  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:32:03.529793  826329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:32:03.529822  826329 ubuntu.go:190] setting up certificates
	I1208 00:32:03.529839  826329 provision.go:84] configureAuth start
	I1208 00:32:03.529901  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:03.552219  826329 provision.go:143] copyHostCerts
	I1208 00:32:03.552258  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552298  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:32:03.552310  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552383  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:32:03.552464  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552480  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:32:03.552484  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552511  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:32:03.552550  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552566  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:32:03.552570  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552592  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:32:03.552642  826329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:32:03.707027  826329 provision.go:177] copyRemoteCerts
	I1208 00:32:03.707105  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:32:03.707150  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.724035  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:03.830514  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:32:03.830586  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:32:03.848126  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:32:03.848238  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:32:03.865293  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:32:03.865368  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:32:03.882781  826329 provision.go:87] duration metric: took 352.917637ms to configureAuth
	I1208 00:32:03.882808  826329 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:32:03.883086  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:03.883204  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.900405  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.900722  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.900745  826329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:32:04.247102  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:32:04.247132  826329 machine.go:97] duration metric: took 1.247646186s to provisionDockerMachine
	I1208 00:32:04.247143  826329 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:32:04.247156  826329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:32:04.247233  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:32:04.247291  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.269420  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.374672  826329 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:32:04.377926  826329 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:32:04.377948  826329 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:32:04.377953  826329 command_runner.go:130] > VERSION_ID="12"
	I1208 00:32:04.377958  826329 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:32:04.377964  826329 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:32:04.377968  826329 command_runner.go:130] > ID=debian
	I1208 00:32:04.377973  826329 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:32:04.377998  826329 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:32:04.378009  826329 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:32:04.378363  826329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:32:04.378386  826329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:32:04.378397  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:32:04.378453  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:32:04.378535  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:32:04.378546  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 00:32:04.378621  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:32:04.378628  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> /etc/test/nested/copy/791807/hosts
	I1208 00:32:04.378672  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:32:04.386632  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:04.404202  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:32:04.421545  826329 start.go:296] duration metric: took 174.385446ms for postStartSetup
	I1208 00:32:04.421649  826329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:32:04.421695  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.439941  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.543929  826329 command_runner.go:130] > 13%
	I1208 00:32:04.544005  826329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:32:04.548692  826329 command_runner.go:130] > 169G
	I1208 00:32:04.548719  826329 fix.go:56] duration metric: took 1.572034198s for fixHost
	I1208 00:32:04.548730  826329 start.go:83] releasing machines lock for "functional-525396", held for 1.572067364s
	I1208 00:32:04.548856  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:04.565574  826329 ssh_runner.go:195] Run: cat /version.json
	I1208 00:32:04.565638  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.565923  826329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:32:04.565984  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.584847  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.600519  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.771794  826329 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:32:04.774495  826329 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:32:04.774657  826329 ssh_runner.go:195] Run: systemctl --version
	I1208 00:32:04.780874  826329 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:32:04.780917  826329 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:32:04.781367  826329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:32:04.818112  826329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:32:04.822491  826329 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:32:04.822532  826329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:32:04.822595  826329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:32:04.830492  826329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:32:04.830518  826329 start.go:496] detecting cgroup driver to use...
	I1208 00:32:04.830579  826329 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:32:04.830661  826329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:32:04.846467  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:32:04.859999  826329 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:32:04.860093  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:32:04.876040  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:32:04.889316  826329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:32:04.999380  826329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:32:05.135529  826329 docker.go:234] disabling docker service ...
	I1208 00:32:05.135652  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:32:05.150887  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:32:05.164082  826329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:32:05.274195  826329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:32:05.386139  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:32:05.399321  826329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:32:05.411741  826329 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 00:32:05.412925  826329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:32:05.413007  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.421375  826329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:32:05.421462  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.430145  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.438751  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.447666  826329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:32:05.455572  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.464290  826329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.472537  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.481189  826329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:32:05.487727  826329 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:32:05.488614  826329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:32:05.496261  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:05.603146  826329 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:32:05.769023  826329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:32:05.769169  826329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:32:05.773391  826329 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 00:32:05.773452  826329 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:32:05.773473  826329 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1208 00:32:05.773494  826329 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:05.773524  826329 command_runner.go:130] > Access: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773553  826329 command_runner.go:130] > Modify: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773581  826329 command_runner.go:130] > Change: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773598  826329 command_runner.go:130] >  Birth: -
	I1208 00:32:05.774292  826329 start.go:564] Will wait 60s for crictl version
	I1208 00:32:05.774387  826329 ssh_runner.go:195] Run: which crictl
	I1208 00:32:05.778688  826329 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:32:05.779547  826329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:32:05.803509  826329 command_runner.go:130] > Version:  0.1.0
	I1208 00:32:05.803790  826329 command_runner.go:130] > RuntimeName:  cri-o
	I1208 00:32:05.804036  826329 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1208 00:32:05.804294  826329 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:32:05.806608  826329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:32:05.806739  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.840244  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.840321  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.840340  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.840361  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.840391  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.840415  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.840434  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.840452  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.840471  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.840498  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.840519  826329 command_runner.go:130] >      static
	I1208 00:32:05.840536  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.840553  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.840567  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.840593  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.840612  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.840629  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.840647  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.840664  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.840690  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.841800  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.872333  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.872357  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.872369  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.872376  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.872381  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.872385  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.872389  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.872395  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.872399  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.872408  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.872412  826329 command_runner.go:130] >      static
	I1208 00:32:05.872422  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.872437  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.872444  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.872448  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.872451  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.872457  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.872463  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.872467  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.872480  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.877414  826329 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:32:05.880269  826329 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:32:05.896780  826329 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:32:05.900764  826329 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:32:05.900873  826329 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:32:05.900985  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:05.901051  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.935654  826329 command_runner.go:130] > {
	I1208 00:32:05.935679  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.935684  826329 command_runner.go:130] >     {
	I1208 00:32:05.935694  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.935699  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935705  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.935708  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935713  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935724  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.935736  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.935743  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935756  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.935763  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935768  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935772  826329 command_runner.go:130] >     },
	I1208 00:32:05.935775  826329 command_runner.go:130] >     {
	I1208 00:32:05.935781  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.935787  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935793  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.935796  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935800  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935810  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.935821  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.935825  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935829  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.935836  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935845  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935853  826329 command_runner.go:130] >     },
	I1208 00:32:05.935857  826329 command_runner.go:130] >     {
	I1208 00:32:05.935864  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.935870  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935876  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.935879  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935885  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935894  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.935905  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.935908  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935912  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.935917  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.935923  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935927  826329 command_runner.go:130] >     },
	I1208 00:32:05.935932  826329 command_runner.go:130] >     {
	I1208 00:32:05.935938  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.935946  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935956  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.935962  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935967  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935975  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.935986  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.935990  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935994  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.936001  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936006  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936011  826329 command_runner.go:130] >       },
	I1208 00:32:05.936021  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936028  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936031  826329 command_runner.go:130] >     },
	I1208 00:32:05.936034  826329 command_runner.go:130] >     {
	I1208 00:32:05.936041  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.936048  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936053  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.936057  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936063  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936072  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.936083  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.936087  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936091  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.936095  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936101  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936105  826329 command_runner.go:130] >       },
	I1208 00:32:05.936110  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936116  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936119  826329 command_runner.go:130] >     },
	I1208 00:32:05.936122  826329 command_runner.go:130] >     {
	I1208 00:32:05.936129  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.936136  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936143  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.936152  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936160  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936169  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.936179  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.936184  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936189  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.936195  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936199  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936203  826329 command_runner.go:130] >       },
	I1208 00:32:05.936207  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936215  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936219  826329 command_runner.go:130] >     },
	I1208 00:32:05.936222  826329 command_runner.go:130] >     {
	I1208 00:32:05.936228  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.936235  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936240  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.936244  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936255  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936263  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.936271  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.936277  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936282  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.936288  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936292  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936295  826329 command_runner.go:130] >     },
	I1208 00:32:05.936298  826329 command_runner.go:130] >     {
	I1208 00:32:05.936306  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.936313  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936318  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.936322  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936326  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936336  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.936362  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.936372  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936377  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.936387  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936391  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936395  826329 command_runner.go:130] >       },
	I1208 00:32:05.936406  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936410  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936414  826329 command_runner.go:130] >     },
	I1208 00:32:05.936417  826329 command_runner.go:130] >     {
	I1208 00:32:05.936424  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.936432  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936437  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.936441  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936445  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936455  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.936465  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.936469  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936473  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.936483  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936487  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.936490  826329 command_runner.go:130] >       },
	I1208 00:32:05.936500  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936504  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.936507  826329 command_runner.go:130] >     }
	I1208 00:32:05.936510  826329 command_runner.go:130] >   ]
	I1208 00:32:05.936513  826329 command_runner.go:130] > }
	I1208 00:32:05.936690  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.936705  826329 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:32:05.936757  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.965491  826329 command_runner.go:130] > {
	I1208 00:32:05.965510  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.965515  826329 command_runner.go:130] >     {
	I1208 00:32:05.965525  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.965542  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965549  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.965553  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965557  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965584  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.965593  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.965596  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965600  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.965604  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965614  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965618  826329 command_runner.go:130] >     },
	I1208 00:32:05.965620  826329 command_runner.go:130] >     {
	I1208 00:32:05.965627  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.965630  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965635  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.965639  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965642  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965650  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.965659  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.965662  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965666  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.965669  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965675  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965679  826329 command_runner.go:130] >     },
	I1208 00:32:05.965682  826329 command_runner.go:130] >     {
	I1208 00:32:05.965689  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.965692  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965700  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.965704  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965708  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965715  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.965723  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.965726  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965733  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.965738  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.965741  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965744  826329 command_runner.go:130] >     },
	I1208 00:32:05.965747  826329 command_runner.go:130] >     {
	I1208 00:32:05.965754  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.965758  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965763  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.965768  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965772  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965779  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.965786  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.965789  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965793  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.965796  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965800  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965803  826329 command_runner.go:130] >       },
	I1208 00:32:05.965811  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965815  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965818  826329 command_runner.go:130] >     },
	I1208 00:32:05.965821  826329 command_runner.go:130] >     {
	I1208 00:32:05.965827  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.965831  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965841  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.965844  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965848  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965859  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.965867  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.965870  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965874  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.965877  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965881  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965884  826329 command_runner.go:130] >       },
	I1208 00:32:05.965891  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965895  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965898  826329 command_runner.go:130] >     },
	I1208 00:32:05.965901  826329 command_runner.go:130] >     {
	I1208 00:32:05.965907  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.965911  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965917  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.965920  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965924  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965932  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.965944  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.965947  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965951  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.965954  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965958  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965961  826329 command_runner.go:130] >       },
	I1208 00:32:05.965964  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965968  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965971  826329 command_runner.go:130] >     },
	I1208 00:32:05.965974  826329 command_runner.go:130] >     {
	I1208 00:32:05.965980  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.965984  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965989  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.965992  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965995  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966003  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.966013  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.966016  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966020  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.966023  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966027  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966030  826329 command_runner.go:130] >     },
	I1208 00:32:05.966033  826329 command_runner.go:130] >     {
	I1208 00:32:05.966042  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.966046  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966051  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.966054  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966058  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966066  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.966082  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.966086  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966090  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.966094  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966097  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.966100  826329 command_runner.go:130] >       },
	I1208 00:32:05.966104  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966109  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966112  826329 command_runner.go:130] >     },
	I1208 00:32:05.966117  826329 command_runner.go:130] >     {
	I1208 00:32:05.966124  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.966127  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966131  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.966136  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966140  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966149  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.966156  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.966160  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966163  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.966167  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966171  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.966173  826329 command_runner.go:130] >       },
	I1208 00:32:05.966177  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966180  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.966183  826329 command_runner.go:130] >     }
	I1208 00:32:05.966186  826329 command_runner.go:130] >   ]
	I1208 00:32:05.966189  826329 command_runner.go:130] > }
	I1208 00:32:05.968541  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.968564  826329 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:32:05.968572  826329 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:32:05.968676  826329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:32:05.968759  826329 ssh_runner.go:195] Run: crio config
	I1208 00:32:06.017314  826329 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 00:32:06.017338  826329 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 00:32:06.017347  826329 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 00:32:06.017350  826329 command_runner.go:130] > #
	I1208 00:32:06.017357  826329 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 00:32:06.017363  826329 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 00:32:06.017370  826329 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 00:32:06.017378  826329 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 00:32:06.017384  826329 command_runner.go:130] > # reload'.
	I1208 00:32:06.017391  826329 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 00:32:06.017404  826329 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 00:32:06.017411  826329 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 00:32:06.017417  826329 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 00:32:06.017423  826329 command_runner.go:130] > [crio]
	I1208 00:32:06.017429  826329 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 00:32:06.017434  826329 command_runner.go:130] > # containers images, in this directory.
	I1208 00:32:06.017704  826329 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 00:32:06.017722  826329 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 00:32:06.017729  826329 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1208 00:32:06.017738  826329 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1208 00:32:06.017898  826329 command_runner.go:130] > # imagestore = ""
	I1208 00:32:06.017914  826329 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 00:32:06.017922  826329 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 00:32:06.018164  826329 command_runner.go:130] > # storage_driver = "overlay"
	I1208 00:32:06.018180  826329 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 00:32:06.018187  826329 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 00:32:06.018278  826329 command_runner.go:130] > # storage_option = [
	I1208 00:32:06.018455  826329 command_runner.go:130] > # ]
	I1208 00:32:06.018487  826329 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 00:32:06.018500  826329 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 00:32:06.018675  826329 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 00:32:06.018694  826329 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 00:32:06.018706  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 00:32:06.018719  826329 command_runner.go:130] > # always happen on a node reboot
	I1208 00:32:06.018990  826329 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 00:32:06.019024  826329 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 00:32:06.019035  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 00:32:06.019041  826329 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 00:32:06.019224  826329 command_runner.go:130] > # version_file_persist = ""
	I1208 00:32:06.019243  826329 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 00:32:06.019258  826329 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 00:32:06.019484  826329 command_runner.go:130] > # internal_wipe = true
	I1208 00:32:06.019500  826329 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1208 00:32:06.019507  826329 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1208 00:32:06.019754  826329 command_runner.go:130] > # internal_repair = true
	I1208 00:32:06.019769  826329 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 00:32:06.019785  826329 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 00:32:06.019793  826329 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 00:32:06.020120  826329 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 00:32:06.020138  826329 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 00:32:06.020143  826329 command_runner.go:130] > [crio.api]
	I1208 00:32:06.020148  826329 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 00:32:06.020346  826329 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 00:32:06.020366  826329 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 00:32:06.020581  826329 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 00:32:06.020605  826329 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 00:32:06.020611  826329 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 00:32:06.020863  826329 command_runner.go:130] > # stream_port = "0"
	I1208 00:32:06.020878  826329 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 00:32:06.021158  826329 command_runner.go:130] > # stream_enable_tls = false
	I1208 00:32:06.021176  826329 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 00:32:06.021352  826329 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 00:32:06.021367  826329 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 00:32:06.021380  826329 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021617  826329 command_runner.go:130] > # stream_tls_cert = ""
	I1208 00:32:06.021634  826329 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 00:32:06.021641  826329 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021794  826329 command_runner.go:130] > # stream_tls_key = ""
	I1208 00:32:06.021808  826329 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 00:32:06.021824  826329 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 00:32:06.021840  826329 command_runner.go:130] > # automatically pick up the changes.
	I1208 00:32:06.022038  826329 command_runner.go:130] > # stream_tls_ca = ""
	I1208 00:32:06.022075  826329 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022282  826329 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 00:32:06.022297  826329 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022560  826329 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 00:32:06.022581  826329 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 00:32:06.022589  826329 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 00:32:06.022596  826329 command_runner.go:130] > [crio.runtime]
	I1208 00:32:06.022603  826329 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 00:32:06.022613  826329 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 00:32:06.022618  826329 command_runner.go:130] > # "nofile=1024:2048"
	I1208 00:32:06.022627  826329 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 00:32:06.022736  826329 command_runner.go:130] > # default_ulimits = [
	I1208 00:32:06.022966  826329 command_runner.go:130] > # ]
	I1208 00:32:06.022982  826329 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 00:32:06.023192  826329 command_runner.go:130] > # no_pivot = false
	I1208 00:32:06.023203  826329 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 00:32:06.023210  826329 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 00:32:06.023435  826329 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 00:32:06.023449  826329 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 00:32:06.023455  826329 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 00:32:06.023463  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023655  826329 command_runner.go:130] > # conmon = ""
	I1208 00:32:06.023668  826329 command_runner.go:130] > # Cgroup setting for conmon
	I1208 00:32:06.023697  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 00:32:06.023812  826329 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 00:32:06.023826  826329 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 00:32:06.023831  826329 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 00:32:06.023839  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023982  826329 command_runner.go:130] > # conmon_env = [
	I1208 00:32:06.024123  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024147  826329 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 00:32:06.024153  826329 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 00:32:06.024161  826329 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 00:32:06.024313  826329 command_runner.go:130] > # default_env = [
	I1208 00:32:06.024407  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024424  826329 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 00:32:06.024439  826329 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1208 00:32:06.024689  826329 command_runner.go:130] > # selinux = false
	I1208 00:32:06.024713  826329 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 00:32:06.024722  826329 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1208 00:32:06.024727  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.024963  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.024977  826329 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1208 00:32:06.024983  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025171  826329 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1208 00:32:06.025185  826329 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 00:32:06.025199  826329 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 00:32:06.025214  826329 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 00:32:06.025222  826329 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 00:32:06.025227  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025459  826329 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 00:32:06.025474  826329 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 00:32:06.025479  826329 command_runner.go:130] > # the cgroup blockio controller.
	I1208 00:32:06.025701  826329 command_runner.go:130] > # blockio_config_file = ""
	I1208 00:32:06.025716  826329 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1208 00:32:06.025721  826329 command_runner.go:130] > # blockio parameters.
	I1208 00:32:06.025998  826329 command_runner.go:130] > # blockio_reload = false
	I1208 00:32:06.026018  826329 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 00:32:06.026025  826329 command_runner.go:130] > # irqbalance daemon.
	I1208 00:32:06.026221  826329 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 00:32:06.026241  826329 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1208 00:32:06.026249  826329 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1208 00:32:06.026257  826329 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1208 00:32:06.026494  826329 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1208 00:32:06.026510  826329 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 00:32:06.026517  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.026722  826329 command_runner.go:130] > # rdt_config_file = ""
	I1208 00:32:06.026753  826329 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 00:32:06.026902  826329 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 00:32:06.026919  826329 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 00:32:06.027125  826329 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 00:32:06.027138  826329 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 00:32:06.027163  826329 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 00:32:06.027177  826329 command_runner.go:130] > # will be added.
	I1208 00:32:06.027277  826329 command_runner.go:130] > # default_capabilities = [
	I1208 00:32:06.027581  826329 command_runner.go:130] > # 	"CHOWN",
	I1208 00:32:06.027682  826329 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 00:32:06.027912  826329 command_runner.go:130] > # 	"FSETID",
	I1208 00:32:06.028073  826329 command_runner.go:130] > # 	"FOWNER",
	I1208 00:32:06.028166  826329 command_runner.go:130] > # 	"SETGID",
	I1208 00:32:06.028351  826329 command_runner.go:130] > # 	"SETUID",
	I1208 00:32:06.028526  826329 command_runner.go:130] > # 	"SETPCAP",
	I1208 00:32:06.028680  826329 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 00:32:06.028802  826329 command_runner.go:130] > # 	"KILL",
	I1208 00:32:06.028996  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029019  826329 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 00:32:06.029028  826329 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 00:32:06.029301  826329 command_runner.go:130] > # add_inheritable_capabilities = false
	I1208 00:32:06.029326  826329 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 00:32:06.029333  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029338  826329 command_runner.go:130] > default_sysctls = [
	I1208 00:32:06.029464  826329 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1208 00:32:06.029477  826329 command_runner.go:130] > ]
	I1208 00:32:06.029483  826329 command_runner.go:130] > # List of devices on the host that a
	I1208 00:32:06.029491  826329 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 00:32:06.029495  826329 command_runner.go:130] > # allowed_devices = [
	I1208 00:32:06.029499  826329 command_runner.go:130] > # 	"/dev/fuse",
	I1208 00:32:06.029507  826329 command_runner.go:130] > # 	"/dev/net/tun",
	I1208 00:32:06.029726  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029756  826329 command_runner.go:130] > # List of additional devices. specified as
	I1208 00:32:06.029769  826329 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 00:32:06.029775  826329 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 00:32:06.029782  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029898  826329 command_runner.go:130] > # additional_devices = [
	I1208 00:32:06.029911  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029918  826329 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 00:32:06.029922  826329 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 00:32:06.030014  826329 command_runner.go:130] > # 	"/etc/cdi",
	I1208 00:32:06.030033  826329 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 00:32:06.030037  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030045  826329 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 00:32:06.030051  826329 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 00:32:06.030058  826329 command_runner.go:130] > # Defaults to false.
	I1208 00:32:06.030179  826329 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 00:32:06.030194  826329 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 00:32:06.030201  826329 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 00:32:06.030206  826329 command_runner.go:130] > # hooks_dir = [
	I1208 00:32:06.030462  826329 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 00:32:06.030539  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030554  826329 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 00:32:06.030561  826329 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 00:32:06.030592  826329 command_runner.go:130] > # its default mounts from the following two files:
	I1208 00:32:06.030598  826329 command_runner.go:130] > #
	I1208 00:32:06.030608  826329 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 00:32:06.030631  826329 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 00:32:06.030642  826329 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 00:32:06.030646  826329 command_runner.go:130] > #
	I1208 00:32:06.030658  826329 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 00:32:06.030668  826329 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 00:32:06.030675  826329 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 00:32:06.030680  826329 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 00:32:06.030684  826329 command_runner.go:130] > #
	I1208 00:32:06.030688  826329 command_runner.go:130] > # default_mounts_file = ""
	I1208 00:32:06.030697  826329 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 00:32:06.030710  826329 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 00:32:06.030795  826329 command_runner.go:130] > # pids_limit = -1
	I1208 00:32:06.030811  826329 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1208 00:32:06.030858  826329 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 00:32:06.030867  826329 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 00:32:06.030881  826329 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 00:32:06.030886  826329 command_runner.go:130] > # log_size_max = -1
	I1208 00:32:06.030903  826329 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 00:32:06.031086  826329 command_runner.go:130] > # log_to_journald = false
	I1208 00:32:06.031102  826329 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 00:32:06.031167  826329 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 00:32:06.031181  826329 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 00:32:06.031241  826329 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 00:32:06.031258  826329 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 00:32:06.031327  826329 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 00:32:06.031335  826329 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 00:32:06.031339  826329 command_runner.go:130] > # read_only = false
	I1208 00:32:06.031345  826329 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 00:32:06.031377  826329 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 00:32:06.031383  826329 command_runner.go:130] > # live configuration reload.
	I1208 00:32:06.031388  826329 command_runner.go:130] > # log_level = "info"
	I1208 00:32:06.031397  826329 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 00:32:06.031408  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.031412  826329 command_runner.go:130] > # log_filter = ""
	I1208 00:32:06.031419  826329 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031430  826329 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 00:32:06.031434  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031452  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031456  826329 command_runner.go:130] > # uid_mappings = ""
	I1208 00:32:06.031462  826329 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031468  826329 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 00:32:06.031472  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031482  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031553  826329 command_runner.go:130] > # gid_mappings = ""
	I1208 00:32:06.031569  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 00:32:06.031632  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031648  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031656  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031742  826329 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 00:32:06.031759  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 00:32:06.031785  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031798  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031807  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.032017  826329 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 00:32:06.032056  826329 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 00:32:06.032071  826329 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 00:32:06.032077  826329 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1208 00:32:06.032099  826329 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 00:32:06.032106  826329 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 00:32:06.032112  826329 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 00:32:06.032205  826329 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 00:32:06.032267  826329 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 00:32:06.032278  826329 command_runner.go:130] > # drop_infra_ctr = true
	I1208 00:32:06.032285  826329 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 00:32:06.032292  826329 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 00:32:06.032307  826329 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 00:32:06.032340  826329 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 00:32:06.032356  826329 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1208 00:32:06.032371  826329 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1208 00:32:06.032378  826329 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1208 00:32:06.032384  826329 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1208 00:32:06.032394  826329 command_runner.go:130] > # shared_cpuset = ""
	I1208 00:32:06.032400  826329 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 00:32:06.032411  826329 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 00:32:06.032448  826329 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 00:32:06.032463  826329 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 00:32:06.032467  826329 command_runner.go:130] > # pinns_path = ""
	I1208 00:32:06.032473  826329 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1208 00:32:06.032479  826329 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1208 00:32:06.032487  826329 command_runner.go:130] > # enable_criu_support = true
	I1208 00:32:06.032493  826329 command_runner.go:130] > # Enable/disable the generation of container and
	I1208 00:32:06.032500  826329 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1208 00:32:06.032732  826329 command_runner.go:130] > # enable_pod_events = false
	I1208 00:32:06.032748  826329 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 00:32:06.032827  826329 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1208 00:32:06.032846  826329 command_runner.go:130] > # default_runtime = "crun"
	I1208 00:32:06.032871  826329 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 00:32:06.032889  826329 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1208 00:32:06.032901  826329 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 00:32:06.032911  826329 command_runner.go:130] > # creation as a file is not desired either.
	I1208 00:32:06.032919  826329 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 00:32:06.032929  826329 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 00:32:06.032938  826329 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 00:32:06.032974  826329 command_runner.go:130] > # ]
	I1208 00:32:06.033041  826329 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 00:32:06.033057  826329 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 00:32:06.033064  826329 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1208 00:32:06.033070  826329 command_runner.go:130] > # Each entry in the table should follow the format:
	I1208 00:32:06.033073  826329 command_runner.go:130] > #
	I1208 00:32:06.033106  826329 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1208 00:32:06.033112  826329 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1208 00:32:06.033117  826329 command_runner.go:130] > # runtime_type = "oci"
	I1208 00:32:06.033192  826329 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1208 00:32:06.033209  826329 command_runner.go:130] > # inherit_default_runtime = false
	I1208 00:32:06.033214  826329 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1208 00:32:06.033219  826329 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1208 00:32:06.033225  826329 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1208 00:32:06.033228  826329 command_runner.go:130] > # monitor_env = []
	I1208 00:32:06.033233  826329 command_runner.go:130] > # privileged_without_host_devices = false
	I1208 00:32:06.033237  826329 command_runner.go:130] > # allowed_annotations = []
	I1208 00:32:06.033263  826329 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1208 00:32:06.033276  826329 command_runner.go:130] > # no_sync_log = false
	I1208 00:32:06.033282  826329 command_runner.go:130] > # default_annotations = {}
	I1208 00:32:06.033376  826329 command_runner.go:130] > # stream_websockets = false
	I1208 00:32:06.033384  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.033433  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.033444  826329 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1208 00:32:06.033456  826329 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1208 00:32:06.033467  826329 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 00:32:06.033474  826329 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 00:32:06.033477  826329 command_runner.go:130] > #   in $PATH.
	I1208 00:32:06.033483  826329 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1208 00:32:06.033489  826329 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 00:32:06.033495  826329 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1208 00:32:06.033504  826329 command_runner.go:130] > #   state.
	I1208 00:32:06.033518  826329 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 00:32:06.033528  826329 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1208 00:32:06.033535  826329 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1208 00:32:06.033547  826329 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1208 00:32:06.033552  826329 command_runner.go:130] > #   the values from the default runtime on load time.
	I1208 00:32:06.033558  826329 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 00:32:06.033563  826329 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 00:32:06.033604  826329 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 00:32:06.033610  826329 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 00:32:06.033615  826329 command_runner.go:130] > #   The currently recognized values are:
	I1208 00:32:06.033697  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 00:32:06.033736  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 00:32:06.033745  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 00:32:06.033760  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 00:32:06.033770  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 00:32:06.033787  826329 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 00:32:06.033799  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1208 00:32:06.033811  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1208 00:32:06.033818  826329 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 00:32:06.033824  826329 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1208 00:32:06.033832  826329 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1208 00:32:06.033842  826329 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1208 00:32:06.033851  826329 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1208 00:32:06.033863  826329 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1208 00:32:06.033869  826329 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1208 00:32:06.033883  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1208 00:32:06.033892  826329 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1208 00:32:06.033896  826329 command_runner.go:130] > #   deprecated option "conmon".
	I1208 00:32:06.033903  826329 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1208 00:32:06.033908  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1208 00:32:06.033916  826329 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1208 00:32:06.033925  826329 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 00:32:06.033933  826329 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1208 00:32:06.033944  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1208 00:32:06.033955  826329 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1208 00:32:06.033959  826329 command_runner.go:130] > #   conmon-rs by using:
	I1208 00:32:06.033976  826329 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1208 00:32:06.033990  826329 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1208 00:32:06.033998  826329 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1208 00:32:06.034005  826329 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1208 00:32:06.034012  826329 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1208 00:32:06.034036  826329 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1208 00:32:06.034044  826329 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1208 00:32:06.034064  826329 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1208 00:32:06.034074  826329 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1208 00:32:06.034087  826329 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1208 00:32:06.034557  826329 command_runner.go:130] > #   when a machine crash happens.
	I1208 00:32:06.034567  826329 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1208 00:32:06.034582  826329 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1208 00:32:06.034589  826329 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1208 00:32:06.034594  826329 command_runner.go:130] > #   seccomp profile for the runtime.
	I1208 00:32:06.034680  826329 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1208 00:32:06.034713  826329 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1208 00:32:06.034720  826329 command_runner.go:130] > #
	I1208 00:32:06.034732  826329 command_runner.go:130] > # Using the seccomp notifier feature:
	I1208 00:32:06.034735  826329 command_runner.go:130] > #
	I1208 00:32:06.034742  826329 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1208 00:32:06.034749  826329 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1208 00:32:06.034762  826329 command_runner.go:130] > #
	I1208 00:32:06.034769  826329 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1208 00:32:06.034785  826329 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1208 00:32:06.034788  826329 command_runner.go:130] > #
	I1208 00:32:06.034795  826329 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1208 00:32:06.034799  826329 command_runner.go:130] > # feature.
	I1208 00:32:06.034802  826329 command_runner.go:130] > #
	I1208 00:32:06.034808  826329 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1208 00:32:06.034819  826329 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1208 00:32:06.034825  826329 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1208 00:32:06.034837  826329 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1208 00:32:06.034858  826329 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1208 00:32:06.034861  826329 command_runner.go:130] > #
	I1208 00:32:06.034867  826329 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1208 00:32:06.034878  826329 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1208 00:32:06.034881  826329 command_runner.go:130] > #
	I1208 00:32:06.034887  826329 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1208 00:32:06.034897  826329 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1208 00:32:06.034900  826329 command_runner.go:130] > #
	I1208 00:32:06.034906  826329 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1208 00:32:06.034916  826329 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1208 00:32:06.034920  826329 command_runner.go:130] > # limitation.
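As a concrete illustration of the seccomp notifier setup described above, the chosen runtime handler must list the annotation in its allowed_annotations array. A minimal sketch, using the "runtime-handler" placeholder from the format template earlier and omitting the handler's other settings:

	[crio.runtime.runtimes.runtime-handler]
	allowed_annotations = [
	  "io.kubernetes.cri-o.seccompNotifierAction",   # permit the notifier annotation for this handler
	]

A pod would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop on its sandbox and use restartPolicy "Never", as the comments above require.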
	I1208 00:32:06.034927  826329 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1208 00:32:06.034932  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1208 00:32:06.034939  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.034944  826329 command_runner.go:130] > runtime_root = "/run/crun"
	I1208 00:32:06.034954  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.034958  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.034962  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.034972  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.034976  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.034981  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.034990  826329 command_runner.go:130] > allowed_annotations = [
	I1208 00:32:06.034999  826329 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1208 00:32:06.035002  826329 command_runner.go:130] > ]
	I1208 00:32:06.035007  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035011  826329 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 00:32:06.035016  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1208 00:32:06.035020  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.035024  826329 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 00:32:06.035034  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.035038  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.035042  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.035046  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.035050  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.035054  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.035145  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035184  826329 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 00:32:06.035191  826329 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 00:32:06.035197  826329 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 00:32:06.035205  826329 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1208 00:32:06.035222  826329 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1208 00:32:06.035233  826329 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1208 00:32:06.035249  826329 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1208 00:32:06.035255  826329 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 00:32:06.035265  826329 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 00:32:06.035274  826329 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 00:32:06.035280  826329 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 00:32:06.035291  826329 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 00:32:06.035294  826329 command_runner.go:130] > # Example:
	I1208 00:32:06.035299  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 00:32:06.035309  826329 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 00:32:06.035318  826329 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 00:32:06.035324  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 00:32:06.035413  826329 command_runner.go:130] > # cpuset = "0-1"
	I1208 00:32:06.035447  826329 command_runner.go:130] > # cpushares = "5"
	I1208 00:32:06.035460  826329 command_runner.go:130] > # cpuquota = "1000"
	I1208 00:32:06.035471  826329 command_runner.go:130] > # cpuperiod = "100000"
	I1208 00:32:06.035475  826329 command_runner.go:130] > # cpulimit = "35"
	I1208 00:32:06.035479  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.035483  826329 command_runner.go:130] > # The workload name is workload-type.
	I1208 00:32:06.035497  826329 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 00:32:06.035502  826329 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 00:32:06.035540  826329 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 00:32:06.035556  826329 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 00:32:06.035563  826329 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 00:32:06.035576  826329 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1208 00:32:06.035584  826329 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1208 00:32:06.035592  826329 command_runner.go:130] > # Default value is set to true
	I1208 00:32:06.035597  826329 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1208 00:32:06.035603  826329 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1208 00:32:06.035607  826329 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1208 00:32:06.035703  826329 command_runner.go:130] > # Default value is set to 'false'
	I1208 00:32:06.035729  826329 command_runner.go:130] > # disable_hostport_mapping = false
	I1208 00:32:06.035736  826329 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1208 00:32:06.035751  826329 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1208 00:32:06.035755  826329 command_runner.go:130] > # timezone = ""
	I1208 00:32:06.035762  826329 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 00:32:06.035769  826329 command_runner.go:130] > #
	I1208 00:32:06.035775  826329 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 00:32:06.035782  826329 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1208 00:32:06.035785  826329 command_runner.go:130] > [crio.image]
	I1208 00:32:06.035791  826329 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 00:32:06.035796  826329 command_runner.go:130] > # default_transport = "docker://"
	I1208 00:32:06.035802  826329 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 00:32:06.035813  826329 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035818  826329 command_runner.go:130] > # global_auth_file = ""
	I1208 00:32:06.035823  826329 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 00:32:06.035833  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035852  826329 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1208 00:32:06.035863  826329 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 00:32:06.035874  826329 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035950  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035964  826329 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 00:32:06.035972  826329 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 00:32:06.035989  826329 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1208 00:32:06.035998  826329 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1208 00:32:06.036009  826329 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 00:32:06.036013  826329 command_runner.go:130] > # pause_command = "/pause"
	I1208 00:32:06.036019  826329 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1208 00:32:06.036030  826329 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1208 00:32:06.036036  826329 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1208 00:32:06.036043  826329 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1208 00:32:06.036052  826329 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1208 00:32:06.036058  826329 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1208 00:32:06.036062  826329 command_runner.go:130] > # pinned_images = [
	I1208 00:32:06.036065  826329 command_runner.go:130] > # ]
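To make the exact/glob/keyword distinction above concrete, a hypothetical pinned_images list could combine all three pattern styles (the image names are illustrative):

	pinned_images = [
	  "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	  "registry.k8s.io/pause*",         # glob: wildcard only at the end
	  "*pause*",                        # keyword: wildcards on both ends
	]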
	I1208 00:32:06.036071  826329 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 00:32:06.036077  826329 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 00:32:06.036087  826329 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 00:32:06.036093  826329 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 00:32:06.036104  826329 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 00:32:06.036109  826329 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1208 00:32:06.036115  826329 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1208 00:32:06.036126  826329 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1208 00:32:06.036133  826329 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1208 00:32:06.036139  826329 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1208 00:32:06.036145  826329 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1208 00:32:06.036150  826329 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1208 00:32:06.036160  826329 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 00:32:06.036167  826329 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 00:32:06.036172  826329 command_runner.go:130] > # changing them here.
	I1208 00:32:06.036184  826329 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1208 00:32:06.036193  826329 command_runner.go:130] > # insecure_registries = [
	I1208 00:32:06.036196  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036300  826329 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 00:32:06.036317  826329 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1208 00:32:06.036326  826329 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 00:32:06.036331  826329 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 00:32:06.036335  826329 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 00:32:06.036342  826329 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1208 00:32:06.036353  826329 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1208 00:32:06.036358  826329 command_runner.go:130] > # auto_reload_registries = false
	I1208 00:32:06.036365  826329 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1208 00:32:06.036377  826329 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1208 00:32:06.036388  826329 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1208 00:32:06.036393  826329 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1208 00:32:06.036398  826329 command_runner.go:130] > # The mode of short name resolution.
	I1208 00:32:06.036404  826329 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1208 00:32:06.036418  826329 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1208 00:32:06.036424  826329 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1208 00:32:06.036433  826329 command_runner.go:130] > # short_name_mode = "enforcing"
	I1208 00:32:06.036439  826329 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1208 00:32:06.036446  826329 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1208 00:32:06.036457  826329 command_runner.go:130] > # oci_artifact_mount_support = true
	I1208 00:32:06.036463  826329 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 00:32:06.036466  826329 command_runner.go:130] > # CNI plugins.
	I1208 00:32:06.036469  826329 command_runner.go:130] > [crio.network]
	I1208 00:32:06.036476  826329 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 00:32:06.036481  826329 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1208 00:32:06.036485  826329 command_runner.go:130] > # cni_default_network = ""
	I1208 00:32:06.036496  826329 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 00:32:06.036501  826329 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 00:32:06.036506  826329 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 00:32:06.036515  826329 command_runner.go:130] > # plugin_dirs = [
	I1208 00:32:06.036642  826329 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 00:32:06.036668  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036675  826329 command_runner.go:130] > # List of included pod metrics.
	I1208 00:32:06.036679  826329 command_runner.go:130] > # included_pod_metrics = [
	I1208 00:32:06.036860  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036921  826329 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 00:32:06.036927  826329 command_runner.go:130] > [crio.metrics]
	I1208 00:32:06.036932  826329 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 00:32:06.036937  826329 command_runner.go:130] > # enable_metrics = false
	I1208 00:32:06.036942  826329 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 00:32:06.036953  826329 command_runner.go:130] > # By default, all metrics are enabled.
	I1208 00:32:06.036960  826329 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1208 00:32:06.036994  826329 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 00:32:06.037043  826329 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 00:32:06.037079  826329 command_runner.go:130] > # metrics_collectors = [
	I1208 00:32:06.037090  826329 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 00:32:06.037155  826329 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1208 00:32:06.037178  826329 command_runner.go:130] > # 	"containers_oom_total",
	I1208 00:32:06.037336  826329 command_runner.go:130] > # 	"processes_defunct",
	I1208 00:32:06.037413  826329 command_runner.go:130] > # 	"operations_total",
	I1208 00:32:06.037662  826329 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 00:32:06.037734  826329 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 00:32:06.037748  826329 command_runner.go:130] > # 	"operations_errors_total",
	I1208 00:32:06.037753  826329 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 00:32:06.037772  826329 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 00:32:06.037792  826329 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 00:32:06.037922  826329 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 00:32:06.037987  826329 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 00:32:06.038011  826329 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 00:32:06.038021  826329 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1208 00:32:06.038045  826329 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1208 00:32:06.038193  826329 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1208 00:32:06.038255  826329 command_runner.go:130] > # ]
	I1208 00:32:06.038268  826329 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1208 00:32:06.038283  826329 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1208 00:32:06.038321  826329 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 00:32:06.038335  826329 command_runner.go:130] > # metrics_port = 9090
	I1208 00:32:06.038341  826329 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 00:32:06.038408  826329 command_runner.go:130] > # metrics_socket = ""
	I1208 00:32:06.038423  826329 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 00:32:06.038430  826329 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 00:32:06.038449  826329 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 00:32:06.038461  826329 command_runner.go:130] > # certificate on any modification event.
	I1208 00:32:06.038588  826329 command_runner.go:130] > # metrics_cert = ""
	I1208 00:32:06.038614  826329 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 00:32:06.038622  826329 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 00:32:06.038740  826329 command_runner.go:130] > # metrics_key = ""
	I1208 00:32:06.038809  826329 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 00:32:06.038823  826329 command_runner.go:130] > [crio.tracing]
	I1208 00:32:06.038829  826329 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 00:32:06.038833  826329 command_runner.go:130] > # enable_tracing = false
	I1208 00:32:06.038876  826329 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1208 00:32:06.038890  826329 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1208 00:32:06.038899  826329 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1208 00:32:06.038973  826329 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1208 00:32:06.038987  826329 command_runner.go:130] > # CRI-O NRI configuration.
	I1208 00:32:06.038992  826329 command_runner.go:130] > [crio.nri]
	I1208 00:32:06.039013  826329 command_runner.go:130] > # Globally enable or disable NRI.
	I1208 00:32:06.039024  826329 command_runner.go:130] > # enable_nri = true
	I1208 00:32:06.039029  826329 command_runner.go:130] > # NRI socket to listen on.
	I1208 00:32:06.039033  826329 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1208 00:32:06.039044  826329 command_runner.go:130] > # NRI plugin directory to use.
	I1208 00:32:06.039198  826329 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1208 00:32:06.039225  826329 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1208 00:32:06.039233  826329 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1208 00:32:06.039239  826329 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1208 00:32:06.039363  826329 command_runner.go:130] > # nri_disable_connections = false
	I1208 00:32:06.039381  826329 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1208 00:32:06.039476  826329 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1208 00:32:06.039494  826329 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1208 00:32:06.039499  826329 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1208 00:32:06.039504  826329 command_runner.go:130] > # NRI default validator configuration.
	I1208 00:32:06.039511  826329 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1208 00:32:06.039518  826329 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1208 00:32:06.039557  826329 command_runner.go:130] > # can be restricted/rejected:
	I1208 00:32:06.039568  826329 command_runner.go:130] > # - OCI hook injection
	I1208 00:32:06.039573  826329 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1208 00:32:06.039586  826329 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1208 00:32:06.039595  826329 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1208 00:32:06.039600  826329 command_runner.go:130] > # - adjustment of linux namespaces
	I1208 00:32:06.039606  826329 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1208 00:32:06.039685  826329 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1208 00:32:06.039812  826329 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1208 00:32:06.039825  826329 command_runner.go:130] > #
	I1208 00:32:06.039830  826329 command_runner.go:130] > # [crio.nri.default_validator]
	I1208 00:32:06.039911  826329 command_runner.go:130] > # nri_enable_default_validator = false
	I1208 00:32:06.039939  826329 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1208 00:32:06.039947  826329 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1208 00:32:06.039959  826329 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1208 00:32:06.039966  826329 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1208 00:32:06.039971  826329 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1208 00:32:06.039975  826329 command_runner.go:130] > # nri_validator_required_plugins = [
	I1208 00:32:06.039978  826329 command_runner.go:130] > # ]
	I1208 00:32:06.039984  826329 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1208 00:32:06.039994  826329 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 00:32:06.040003  826329 command_runner.go:130] > [crio.stats]
	I1208 00:32:06.040013  826329 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 00:32:06.040019  826329 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 00:32:06.040027  826329 command_runner.go:130] > # stats_collection_period = 0
	I1208 00:32:06.040033  826329 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1208 00:32:06.040043  826329 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1208 00:32:06.040047  826329 command_runner.go:130] > # collection_period = 0
	I1208 00:32:06.041802  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994368044Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1208 00:32:06.041819  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994407331Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1208 00:32:06.041829  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994434752Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1208 00:32:06.041836  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994457826Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1208 00:32:06.041847  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994536038Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:06.041867  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994955873Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1208 00:32:06.041895  826329 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
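The messages above show CRI-O merging drop-in files from /etc/crio/crio.conf.d on top of its defaults. The contents of 02-crio.conf and 10-crio.conf are not included in this log; a minimal sketch of what such a drop-in can look like, assuming one only wanted to override the pause image and the log level (both options documented in the dump above), is:

	# /etc/crio/crio.conf.d/99-example.conf (hypothetical file name and values)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	log_level = "debug"

Later drop-ins override earlier ones, so per-option overrides can stay in small files without restating the whole configuration.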
	I1208 00:32:06.042057  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:06.042089  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:06.042117  826329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:32:06.042147  826329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:32:06.042284  826329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:32:06.042367  826329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:32:06.049993  826329 command_runner.go:130] > kubeadm
	I1208 00:32:06.050024  826329 command_runner.go:130] > kubectl
	I1208 00:32:06.050029  826329 command_runner.go:130] > kubelet
	I1208 00:32:06.051018  826329 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:32:06.051091  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:32:06.059413  826329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:32:06.073688  826329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:32:06.087599  826329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 00:32:06.100920  826329 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:32:06.104607  826329 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:32:06.104862  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:06.223310  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:06.506702  826329 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:32:06.506774  826329 certs.go:195] generating shared ca certs ...
	I1208 00:32:06.506805  826329 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:06.507033  826329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:32:06.507124  826329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:32:06.507152  826329 certs.go:257] generating profile certs ...
	I1208 00:32:06.507310  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:32:06.507422  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:32:06.507510  826329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:32:06.507537  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:32:06.507566  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:32:06.507605  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:32:06.507636  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:32:06.507680  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:32:06.507713  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:32:06.507755  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:32:06.507788  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:32:06.507873  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:32:06.507940  826329 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:32:06.507964  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:32:06.508024  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:32:06.508086  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:32:06.508156  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:32:06.508255  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:06.508336  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.508374  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.508417  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.509152  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:32:06.534629  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:32:06.554458  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:32:06.573968  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:32:06.590997  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:32:06.608508  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:32:06.625424  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:32:06.642336  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:32:06.660002  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:32:06.677652  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:32:06.695647  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:32:06.713354  826329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:32:06.725836  826329 ssh_runner.go:195] Run: openssl version
	I1208 00:32:06.731951  826329 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:32:06.732096  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.739312  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:32:06.746650  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750259  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750312  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750360  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.790520  826329 command_runner.go:130] > 51391683
	I1208 00:32:06.791045  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:32:06.798345  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.805645  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:32:06.813042  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816781  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816807  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816859  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.857524  826329 command_runner.go:130] > 3ec20f2e
	I1208 00:32:06.857994  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:32:06.865262  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.872409  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:32:06.879529  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883021  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883115  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883198  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.923843  826329 command_runner.go:130] > b5213941
	I1208 00:32:06.924322  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:32:06.931656  826329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935287  826329 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935325  826329 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:32:06.935332  826329 command_runner.go:130] > Device: 259,1	Inode: 1322385     Links: 1
	I1208 00:32:06.935354  826329 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:06.935369  826329 command_runner.go:130] > Access: 2025-12-08 00:27:59.408752113 +0000
	I1208 00:32:06.935374  826329 command_runner.go:130] > Modify: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935396  826329 command_runner.go:130] > Change: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935407  826329 command_runner.go:130] >  Birth: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935530  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:32:06.975831  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:06.976261  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:32:07.017790  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.017978  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:32:07.058488  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.058966  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:32:07.099457  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.099917  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:32:07.141471  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.141903  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:32:07.182188  826329 command_runner.go:130] > Certificate will not expire
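
	The six openssl calls above use "-checkend 86400" to ask whether each control-plane certificate expires within the next 24 hours; "Certificate will not expire" means every one of them is still valid for at least a day. A minimal equivalent of that check in Go with crypto/x509 (the path below is just one of the files verified above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certificates checked above; any PEM-encoded cert works the same way.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same question as `openssl x509 -checkend 86400`: does the cert expire within 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}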
	I1208 00:32:07.182659  826329 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:07.182760  826329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:32:07.182825  826329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:32:07.209144  826329 cri.go:89] found id: ""
	I1208 00:32:07.209214  826329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:32:07.216134  826329 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:32:07.216154  826329 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:32:07.216162  826329 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:32:07.217097  826329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
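
	The restart-versus-init decision above is driven only by whether kubeadm/kubelet state already exists on the node; the real check runs `sudo ls` over SSH, but a hypothetical local version of the same presence test looks like this:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The same three paths listed by the `sudo ls` above.
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		existing := 0
		for _, p := range paths {
			if _, err := os.Stat(p); err == nil {
				existing++
			}
		}
		if existing > 0 {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no existing configuration found, a fresh kubeadm init would be needed")
		}
	}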
	I1208 00:32:07.217114  826329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:32:07.217178  826329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:32:07.224428  826329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:32:07.224856  826329 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.224961  826329 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "functional-525396" cluster setting kubeconfig missing "functional-525396" context setting]
	I1208 00:32:07.225241  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.225667  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.225818  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.226341  826329 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:32:07.226363  826329 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:32:07.226369  826329 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:32:07.226375  826329 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:32:07.226381  826329 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:32:07.226674  826329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:32:07.226772  826329 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:32:07.234310  826329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:32:07.234378  826329 kubeadm.go:602] duration metric: took 17.25872ms to restartPrimaryControlPlane
	I1208 00:32:07.234395  826329 kubeadm.go:403] duration metric: took 51.743543ms to StartCluster
	I1208 00:32:07.234412  826329 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.234484  826329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.235129  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.235358  826329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:32:07.235583  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:07.235658  826329 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:32:07.235740  826329 addons.go:70] Setting storage-provisioner=true in profile "functional-525396"
	I1208 00:32:07.235754  826329 addons.go:239] Setting addon storage-provisioner=true in "functional-525396"
	I1208 00:32:07.235778  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.236237  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.236576  826329 addons.go:70] Setting default-storageclass=true in profile "functional-525396"
	I1208 00:32:07.236601  826329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-525396"
	I1208 00:32:07.236875  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.242309  826329 out.go:179] * Verifying Kubernetes components...
	I1208 00:32:07.245184  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:07.271460  826329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:32:07.274400  826329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.274424  826329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:32:07.274492  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.276071  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.276241  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.276512  826329 addons.go:239] Setting addon default-storageclass=true in "functional-525396"
	I1208 00:32:07.276540  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.276944  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.314823  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.318477  826329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:07.318497  826329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:32:07.318558  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.352646  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.447557  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:07.488721  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.519084  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.257520  826329 node_ready.go:35] waiting up to 6m0s for node "functional-525396" to be "Ready" ...
	I1208 00:32:08.257618  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257654  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257688  826329 retry.go:31] will retry after 154.925821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257654  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.257704  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257722  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257734  826329 retry.go:31] will retry after 240.899479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257750  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.258076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.413579  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.477856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.477934  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.477962  826329 retry.go:31] will retry after 471.79599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.499019  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.559244  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.559341  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.559365  826329 retry.go:31] will retry after 419.613997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
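
	The repeated "apply failed, will retry" entries in this stretch come from a retry loop that re-runs the kubectl apply with a growing, jittered delay until the apiserver answers on port 8441. A rough sketch of that pattern in Go (illustrative delays, not minikube's exact policy in retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a jittered, doubling delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		// Stand-in for the failing `kubectl apply` while the apiserver is down.
		err := retryWithBackoff(5, 200*time.Millisecond, func() error {
			return errors.New("connect: connection refused")
		})
		fmt.Println("final result:", err)
	}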
	I1208 00:32:08.758693  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.758772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.759084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.950598  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.979140  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.022887  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.022933  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.022979  826329 retry.go:31] will retry after 789.955074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083550  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.083656  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083684  826329 retry.go:31] will retry after 584.522236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.668477  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.723720  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.727856  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.727932  826329 retry.go:31] will retry after 996.136704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.757987  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.758082  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.813684  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:09.865943  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.869391  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.869422  826329 retry.go:31] will retry after 1.082403251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.257910  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:10.258329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
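
	The GET requests against /api/v1/nodes/functional-525396 in this stretch are the node_ready wait loop: roughly every 500ms it asks the apiserver for the node and checks its Ready condition, tolerating "connection refused" while the control plane restarts. A minimal client-go sketch of the same poll (the kubeconfig path is taken from this run and used only for illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22054-789938/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "functional-525396", metav1.GetOptions{})
			if err != nil {
				// e.g. "connection refused" while the apiserver is still coming back up.
				fmt.Println("will retry:", err)
			} else {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
				fmt.Println("node exists but is not Ready yet")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}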
	I1208 00:32:10.724942  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:10.758490  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.758896  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.786956  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:10.787023  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.787045  826329 retry.go:31] will retry after 1.653307887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.952461  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:11.017630  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:11.017682  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.017706  826329 retry.go:31] will retry after 1.450018323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.257721  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.258081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.757826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.757911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.258016  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.258092  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.258398  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:12.258449  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.440941  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:12.468519  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:12.523147  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.523192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.523212  826329 retry.go:31] will retry after 1.808868247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537050  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.537096  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537115  826329 retry.go:31] will retry after 1.005297336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.758616  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.758689  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.758985  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.257733  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.542714  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:13.607721  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:13.607772  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.607793  826329 retry.go:31] will retry after 2.59048957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.758025  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.758103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.257759  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.257837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.332402  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:14.393856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:14.393908  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.393927  826329 retry.go:31] will retry after 3.003957784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.758447  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.758779  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:14.758833  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:15.258432  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.258504  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.258873  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.758697  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.758770  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.198619  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:16.257994  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.258110  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.258333  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.261663  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:16.261706  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.261724  826329 retry.go:31] will retry after 3.921003057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.758355  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.758442  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.758740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.258595  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.258667  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.259014  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:17.259070  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:17.398537  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:17.459046  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:17.459087  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.459108  826329 retry.go:31] will retry after 6.352068949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.758636  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.758713  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.759027  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.757758  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.758113  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.258205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.757895  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:19.758338  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
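
Interleaved with the apply retries, node_ready.go polls GET /api/v1/nodes/functional-525396 roughly every 500ms and logs a warning while the connection is refused. The sketch below reproduces only the reachability part of that loop, waiting until the apiserver's port answers at all; the real check also requires the node's Ready condition and authenticates with cluster credentials, both omitted here (TLS verification is skipped, so even a 401/403 response counts as "reachable").

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer polls the node URL from the log until the apiserver
    // answers or the deadline passes. Any HTTP response means the port is open;
    // "connection refused" keeps the loop going. Illustration only.
    func waitForAPIServer(url string, interval, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
    			return nil
    		}
    		fmt.Printf("still unreachable: %v\n", err)
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("apiserver at %s did not answer within %s", url, timeout)
    }

    func main() {
    	_ = waitForAPIServer("https://192.168.49.2:8441/api/v1/nodes/functional-525396",
    		500*time.Millisecond, 30*time.Second)
    }
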
	I1208 00:32:20.183008  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:20.244376  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.244427  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.244447  826329 retry.go:31] will retry after 4.642616038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.258603  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.258946  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.757858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.758256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.757997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.758369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:22.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.757950  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.758369  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.257963  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.258271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.758124  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.758456  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.758513  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.811708  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:23.877239  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:23.877286  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:23.877305  826329 retry.go:31] will retry after 3.991513365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.257726  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.757814  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.757890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.887652  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:24.946807  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:24.946870  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.946894  826329 retry.go:31] will retry after 6.868435312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:25.258372  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.258452  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.258751  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.758655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.759159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.759287  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:26.257937  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.258011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.258320  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.757849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.758164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.258591  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.758609  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.869339  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:27.929619  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:27.929669  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:27.929689  826329 retry.go:31] will retry after 5.640751927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:28.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.258197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:28.258246  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.757900  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.257906  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.758680  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.758746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.759010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:30.759051  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:31.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.258120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.757934  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.815479  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:31.877679  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:31.877725  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:31.877744  826329 retry.go:31] will retry after 9.288265427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:32.258204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.258274  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.258579  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.758594  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.758959  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.257805  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.258256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:33.258316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:33.570705  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:33.628260  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:33.631756  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.631797  826329 retry.go:31] will retry after 7.380803559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.758003  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.758091  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.257826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.257908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.757933  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.757723  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:35.758156  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:36.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.257953  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.258310  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.758204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.758282  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.758636  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:37.758697  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:38.258444  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.258520  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.258964  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.758657  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.758988  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.258591  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.259009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.757689  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.757764  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.758032  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.257724  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.257806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.258168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:40.258225  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:40.757812  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.757892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.013670  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:41.072281  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.076192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.076223  826329 retry.go:31] will retry after 30.64284814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.166454  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:41.227404  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.227446  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.227466  826329 retry.go:31] will retry after 28.006603896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.258583  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.258655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.758793  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.758886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.759193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.257895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.258236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:42.258293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.758154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.758523  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.258386  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.258459  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.258782  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.758542  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.758614  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.758961  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.258683  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.258759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:44.259091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.757800  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.758206  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.258097  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.259164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.757651  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.757746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.758010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.257735  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.257815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.258117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.757885  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.757969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.758288  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.758347  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:47.258326  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.258400  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.258685  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.758684  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.758763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.759114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.257709  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.757752  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.758123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:49.258218  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:49.757765  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.758188  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:51.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.258204  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:51.258253  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:51.757903  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.757978  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.758301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.757965  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.758392  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:53.758279  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:54.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.257882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:54.757818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.757897  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.258277  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.757925  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:55.758403  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:56.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.258035  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.258362  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:56.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.258678  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.258763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.259088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.757900  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.757974  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:58.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.258215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:58.258269  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:58.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.758311  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.257792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.258100  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.757787  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:00.257846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:00.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:00.758031  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.758108  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.757962  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:02.257983  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.258055  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.258387  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:02.258456  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:02.757985  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.758059  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.258055  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.258125  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.258438  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.757882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:04.257989  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:04.258481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:04.758118  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.758201  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.758485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.258270  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.758448  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.758527  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.758934  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.257684  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.257772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.258049  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:06.758206  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:07.258726  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.258824  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.259215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:07.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.758011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.758271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.257849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:08.758228  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:09.234960  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:09.258398  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.258467  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.258726  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:09.299771  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:09.299811  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.299830  826329 retry.go:31] will retry after 22.917133282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.758561  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.758640  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.758995  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.258770  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.258868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.259197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.757838  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.758190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.257813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:11.258179  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:11.719678  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:11.758124  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.758203  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.758476  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.779600  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:11.783324  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:11.783357  826329 retry.go:31] will retry after 27.574784486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:12.257740  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.258104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:12.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:13.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.258219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:13.258272  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:13.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.757988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.257958  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.258037  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.258315  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:15.258360  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:15.757919  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.757879  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.257963  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.258036  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.258357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:17.258414  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.758272  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.758354  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.758668  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.258406  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.258487  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.258798  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.758471  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.758544  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.758891  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.258691  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.258772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.259134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.259190  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.757739  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.758088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.757870  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.757943  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.758290  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.757993  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.257852  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.258182  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:24.258220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.757878  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.758349  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.258345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.257811  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.258284  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.758040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.258252  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.258330  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.258588  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.758645  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.758735  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.759079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.758067  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.758108  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.757789  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.257875  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.257941  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.258210  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.757889  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.758308  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.257774  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.757714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.757784  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.758087  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.217681  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:32.258110  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.258497  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.272413  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:32.276021  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.276065  826329 retry.go:31] will retry after 31.830018043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.258151  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.258517  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.758451  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.258598  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.259035  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.758635  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.758714  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.759056  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.257714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.258111  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.758267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.257939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.757891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.258214  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.258289  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.258578  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:37.258623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.758354  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.758421  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.758674  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.258403  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.258497  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.258867  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.758486  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.758558  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.758906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.258694  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.258758  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.259030  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.259072  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.358376  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:39.412374  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416050  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416143  826329 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:33:39.758638  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.758720  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.759108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.757846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.757931  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.257809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.757977  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.758050  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.758393  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:42.258098  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.258182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.258488  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.758485  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.758557  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.758915  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.258576  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.258649  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.258992  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.757700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.757773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.758038  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.257757  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.258132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:44.258184  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.757809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.757999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.758336  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.258084  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.258468  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:46.258519  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.758126  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.758195  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.758462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.258480  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.258906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.758307  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.257842  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.758219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.758291  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:49.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.258184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.757922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.757790  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.257971  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.258282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:51.258346  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.757834  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.757908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.758182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.758452  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.258459  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.258900  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:53.258955  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.758700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.758780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.759083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.258123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.758170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:55.758182  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:56.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.757939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.758018  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.758340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.258337  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.258409  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.258677  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.758592  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:57.759063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:58.257674  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.257773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.757693  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.757771  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.758081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.258187  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.758199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.265698  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.265780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.266096  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:00.266143  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:00.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.757872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.258053  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.257892  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.258340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.758185  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.758273  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.758590  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:02.758643  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:03.258621  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.258702  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.757895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.758191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.106865  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:34:04.166273  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166323  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166403  826329 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:34:04.169502  826329 out.go:179] * Enabled addons: 
	I1208 00:34:04.171536  826329 addons.go:530] duration metric: took 1m56.935875389s for enable addons: enabled=[]
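	(Context, not part of the captured log: the long run of GET requests surrounding this point is the node readiness poll; roughly every 500ms the node object is fetched from https://192.168.49.2:8441 and a "will retry" warning is logged whenever the connection is refused. The Go sketch below re-creates that poll-until-Ready pattern for illustration only; it is not minikube's implementation, and the struct, function names, and anonymous, TLS-skipping request are assumptions — a real client would authenticate with client certificates.)

	// node_ready_sketch.go: illustrative re-creation of the poll loop behind the
	// node_ready.go warnings in this log; not minikube's actual code.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	// nodeStatus mirrors just enough of the Kubernetes Node object to read the
	// "Ready" condition from /api/v1/nodes/<name>.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// waitNodeReady polls the node object every 500ms until its Ready condition
	// is "True" or the deadline passes, logging and retrying on connection
	// errors, which is the behaviour the warnings above describe.
	func waitNodeReady(apiServer, node string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", node, err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			var ns nodeStatus
			decodeErr := json.NewDecoder(resp.Body).Decode(&ns)
			resp.Body.Close()
			if decodeErr == nil {
				for _, c := range ns.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %s not Ready within %s", node, timeout)
	}

	func main() {
		// Endpoint and node name taken from the log above.
		err := waitNodeReady("https://192.168.49.2:8441", "functional-525396", 6*time.Minute)
		fmt.Println("result:", err)
	}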
	I1208 00:34:04.258604  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.258682  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.259013  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.758662  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.758731  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.759011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:04.759062  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:05.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.758048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.758370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.257730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.258101  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.758131  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.758204  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.758570  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.258500  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.258586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.258950  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:07.259055  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:07.757997  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.758357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.257713  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.257788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.258063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:09.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:10.257921  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.258346  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.757735  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.757804  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.758062  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.757910  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:11.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:12.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.258391  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.757907  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.757979  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.258000  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.258079  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.757976  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.758046  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.758318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:14.258216  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:14.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.758229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.257940  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.258013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.258338  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:16.258395  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:16.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.758127  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.258701  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.258775  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.757896  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.758282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.257973  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.258048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.757762  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:18.758243  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.258352  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.758033  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.758409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.757890  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.757981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.758323  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:20.758384  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:21.257944  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.258010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.758322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.257850  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.257925  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.258270  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.758019  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.758365  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:22.758408  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:23.258071  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.258151  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.258491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.758281  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.758363  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.758707  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.258477  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.258561  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.759183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.759247  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:25.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.258000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.757806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.758120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.258248  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.757971  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.758380  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.258327  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.258401  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.258666  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:27.258716  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.758723  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.758798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.759103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.258027  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.258370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.758085  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.758508  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:29.758566  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:30.258264  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.258340  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.258608  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.758360  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.758437  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.758793  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.258627  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.258701  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.259047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.757815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.758076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.257780  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:32.258235  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:32.758097  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.758176  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.258283  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.258362  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.258621  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.758421  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.758509  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.758874  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.258773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.259148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:34.259210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.757843  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.757921  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.757995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.758360  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.257977  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.258049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.757866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.758233  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:37.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.257964  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.258296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.758129  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.758200  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.758490  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.258191  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.258269  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.758454  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.758534  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.758898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.758959  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:39.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.258627  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.258916  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.758708  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.759139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.257796  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.757783  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.758212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:41.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:41.258249  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:41.757913  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.758308  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.758011  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.758449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:43.258150  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.258227  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.258566  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:43.258632  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:43.758358  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.758430  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.758722  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.258546  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.259073  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.757871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.257935  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.258485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.758673  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.758756  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:45.759202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:46.257864  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.257946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.258291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:46.758013  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.258513  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.258598  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.259004  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.757974  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.758047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:48.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.257839  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.258125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:48.258175  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:48.757743  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.757816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.758138  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.257906  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.758137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:50.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.257875  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:50.258267  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:50.757934  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.758014  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.758361  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.258044  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.258119  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.258431  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.758821  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.758917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.759213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.757986  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.758060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.758375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:52.758428  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:53.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:53.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.758227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.757810  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.757886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:55.257839  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.257917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:55.258313  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:55.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.757796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.757854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.758141  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:57.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:57.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:57.758246  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.758647  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.258478  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.258560  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.258910  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.257905  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.258259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.758063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.758436  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:59.758494  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:00.270583  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.271106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.271544  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:00.758373  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.758448  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.758792  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.258597  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.259052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:02.257942  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.258019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.258319  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:02.258369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:02.758254  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.758335  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.758657  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.258485  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.258576  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.258926  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.757769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.258084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:04.758220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:05.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.257988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.258274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.257890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.258218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.758268  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:07.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.258264  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.258524  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.758503  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.758579  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.758911  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.258711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.258788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.259165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.758114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:09.258314  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:09.757867  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.257728  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.758154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.257828  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.257901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:11.758292  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.758010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.758331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.257734  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.258128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.757740  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.758156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.257879  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.257958  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:14.258372  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:14.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.258226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.757850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:16.758262  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.758126  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.258225  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.757982  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.758084  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:18.758496  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:19.258078  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.258148  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.258462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.758152  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.257773  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.257847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.258174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.757731  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.758079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:21.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:21.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.758255  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.258007  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.258298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.757958  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.758029  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.257782  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.757721  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.757792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:23.758157  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.257916  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.757747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.757838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.257741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.258153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:25.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.257792  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.257867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.258190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.757716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.757791  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.758047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.257747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.257826  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.258159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.757938  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.758339  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:27.758399  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:28.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.257817  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.258135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.758185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.257754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.757884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.758247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.258359  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:30.258416  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:30.758069  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.758447  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.257716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.757859  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.258342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.758262  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.758582  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:32.758623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.258445  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.258519  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.258864  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.758759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.759120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.757780  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.257854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.258302  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:35.757946  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.758342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.258034  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.258106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.758092  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.758170  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.758498  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.258371  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.258441  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.258740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.258804  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:37.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.758737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.759093  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.758009  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.758085  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.758354  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.258253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.758008  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.758083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.758427  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:39.758481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.258151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.757846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.758147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.258244  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.757920  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.757992  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.758263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.257833  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.258385  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.258459  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.758115  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.758189  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.758495  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.258231  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.258593  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.758356  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.758433  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.758767  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.258451  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.258526  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.258817  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.258887  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:44.758589  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.758661  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.758935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.257830  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.757933  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.257995  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.258070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.258330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.757844  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.758227  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.757930  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.758268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.757753  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.258251  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.758020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.758330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.258077  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.258159  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.258484  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.757837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.757936  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.758281  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.758053  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.758433  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.258161  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.258558  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.758318  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.758393  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.758646  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.758686  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.258483  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.258562  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.258917  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.758694  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.758792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.759186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.257832  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.258147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.757780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.758109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.257711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.258202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.257884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.257966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.758093  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.258229  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.258576  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.258619  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:58.758339  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.758413  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.758719  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.258566  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.258656  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.259028  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.757811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.758074  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.258301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.757822  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.757896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:00.758231  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.257745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.258119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.757848  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.758161  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.257756  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.758045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:02.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:03.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.757799  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.758980  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:36:04.257702  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.258057  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.758149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.257856  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:05.757874  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.757952  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.758274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.257951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.258331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.758228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.258156  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.258257  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.258603  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:07.258657  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:07.758639  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.758722  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.759070  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.257829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.757812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.257802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.257878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.758023  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:09.758454  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:10.258096  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.258168  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.757867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.257926  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.258015  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.758043  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.758118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:12.258271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:12.758147  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.758239  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.758564  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.258372  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.258650  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.758403  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.758476  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.758795  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.258438  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.258516  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.258865  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:14.258923  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:14.758558  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.758632  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.257698  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.257781  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.258012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.258318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.757852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.758196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:16.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:17.257965  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.258040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.757949  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.257775  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.257850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.757883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.258195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:19.757899  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.257800  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.258270  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:21.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.258048  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.258121  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.757988  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.758096  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.758420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:23.258320  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:23.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.758051  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.758371  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-525396 request/response cycles repeat roughly every 500 ms from 00:36:24 through 00:37:25, each returning an empty response (status="", 0 ms) because the connection is refused; node_ready.go:55 logs the same "will retry" warning about every 2 seconds (00:36:25, 00:36:27, 00:36:30, ..., 00:37:23) ...]
	W1208 00:37:23.758293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:25.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:26.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.258263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:26.258318  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:26.757964  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.758030  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.758273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.258297  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.258369  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.258691  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.758719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.758793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.759134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.257821  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:28.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:29.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:29.757719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.757786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.758037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.258173  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.757761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.758153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:31.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.257787  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.258040  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:31.258078  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:31.757746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.757831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.257904  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.758153  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.758406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:33.257779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.258158  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:33.258205  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:33.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.757959  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.257990  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.258252  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.758130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.257853  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.258198  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:35.258259  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:35.757729  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.757808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.758125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.257840  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:37.258028  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.258098  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.258344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:37.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:37.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.758350  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.757892  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:39.758261  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:40.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.257976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.258247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:40.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.758250  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.757732  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.758046  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:42.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.258257  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:42.258317  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.758145  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.758527  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.258368  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.258629  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.758381  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.758456  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:44.258642  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.258728  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.259104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:44.259162  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:44.757666  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.757747  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.758033  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.258118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.258898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.758751  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.759069  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:46.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.258765  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.259139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:46.259195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:46.757764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.758163  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.258575  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.757955  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.758294  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.757898  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:48.758358  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:49.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.258126  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:49.757824  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.757899  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.258201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.757976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:51.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.257834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:51.258245  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:51.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.758176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.257907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.757998  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.758067  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.758400  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.257761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.257831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.258156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.758051  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:53.758091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:54.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:54.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.258107  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.757840  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.758276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:55.758329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:56.257991  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.258063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:56.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.758080  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.257909  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.258228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.757928  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.758314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:57.758371  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:58.257725  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:58.757817  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.758235  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.257927  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.258328  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.757914  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:00.257912  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.258367  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:00.258421  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:00.758080  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.758156  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.758491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.258328  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.258416  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.258737  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.758951  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.257691  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.257768  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.258118  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:02.758341  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:03.258024  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.258103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.258449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:03.758162  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.758236  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.758778  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.258999  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.757698  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:05.257820  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:05.258295  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:05.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.257819  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.757775  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:07.262532  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.262623  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.263011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:07.263063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:07.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:08.257967  826329 node_ready.go:38] duration metric: took 6m0.00040399s for node "functional-525396" to be "Ready" ...
	I1208 00:38:08.261085  826329 out.go:203] 
	W1208 00:38:08.263874  826329 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:38:08.263896  826329 out.go:285] * 
	W1208 00:38:08.266040  826329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:38:08.269117  826329 out.go:203] 
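	Note: the retry loop above is minikube waiting for the node's Ready condition; every GET to https://192.168.49.2:8441/api/v1/nodes/functional-525396 is refused until the 6m0s budget expires at 00:38:08. For reference, below is a minimal, hypothetical client-go sketch of that kind of readiness poll (same 500ms cadence and 6-minute timeout as in the log; waitNodeReady is an invented helper, and the kubeconfig path and node name are copied from the log). It is a sketch only, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True,
	// retrying on transient errors such as "connection refused".
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// The refused connections seen in the log land here; keep retrying.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "functional-525396"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}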
	
	
	==> CRI-O <==
	Dec 08 00:38:16 functional-525396 crio[5366]: time="2025-12-08T00:38:16.949491859Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=f4eaf628-6de6-4466-aca7-624d7f3b6914 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053265535Z" level=info msg="Checking image status: minikube-local-cache-test:functional-525396" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053468212Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053513136Z" level=info msg="Image minikube-local-cache-test:functional-525396 not found" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053588993Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-525396 found" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082147724Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-525396" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082297355Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-525396 not found" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082342336Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-525396 found" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.10804915Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-525396" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.108198223Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-525396 not found" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.108238174Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-525396 found" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.09750356Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c94e4ad-c4a7-48fb-b79c-98d473974851 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429399874Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429582251Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429631703Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987727152Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987886792Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987947191Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.023951286Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.024080657Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.024115136Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.071842765Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.072030098Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.072079904Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.606102194Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=92ba9907-b69c-4125-b030-0d1648257605 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:22.205807    9403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:22.206609    9403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:22.208267    9403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:22.208772    9403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:22.210305    9403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
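	Note: kubectl here fails for the same reason as the wait loop earlier: nothing is listening on port 8441. A quick, hypothetical Go check (assuming it is run on the affected host) that probes the two apiserver addresses seen in this report would look like the sketch below.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Both addresses appear in the report: localhost:8441 (kubectl) and
		// 192.168.49.2:8441 (minikube's node wait loop).
		for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err) // expected: connection refused while the apiserver is down
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", addr)
		}
	}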
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:38:22 up  5:20,  0 user,  load average: 0.31, 0.26, 0.67
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:38:20 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:20 functional-525396 kubelet[9239]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:20 functional-525396 kubelet[9239]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:20 functional-525396 kubelet[9239]: E1208 00:38:20.077562    9239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:20 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:20 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:20 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 08 00:38:20 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:20 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:20 functional-525396 kubelet[9299]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:20 functional-525396 kubelet[9299]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:20 functional-525396 kubelet[9299]: E1208 00:38:20.819604    9299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:20 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:20 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:21 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 08 00:38:21 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:21 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:21 functional-525396 kubelet[9320]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:21 functional-525396 kubelet[9320]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:21 functional-525396 kubelet[9320]: E1208 00:38:21.591388    9320 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:21 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:21 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:22 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 08 00:38:22 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:22 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
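The kubelet section above is the root cause of every "connection refused" in this run: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), systemd has already restarted it more than 1,150 times, and so the apiserver on port 8441 never comes up. A minimal sketch of how one might confirm the hierarchy from inside the node (e.g. via minikube ssh); this is not part of the test suite and assumes the golang.org/x/sys/unix module is available:

    // Hedged sketch (not part of the minikube tests): report whether the host mounts
    // the unified cgroup v2 hierarchy, which kubelet v1.35.0-beta.0 requires.
    // Assumes golang.org/x/sys/unix and that it runs on the node itself.
    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		panic(err)
    	}
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified hierarchy) - kubelet can start")
    	} else {
    		fmt.Println("cgroup v1 - matches the validation failure in the kubelet log above")
    	}
    }

Given the error message in the kubelet log, the second branch is what this node would report, which is why the restart counter keeps climbing.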
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (400.27439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-525396 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-525396 get pods: exit status 1 (106.094557ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-525396 get pods": exit status 1
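The direct kubectl invocation fails at the same layer: the context resolves to the right endpoint (192.168.49.2:8441, the container IP shown in the inspect output below), but nothing is listening because the control plane never started. A quick stdlib-only probe, separate from the test, that reproduces the failure mode:

    // Hedged sketch: dial the apiserver endpoint reported by kubectl above to show the
    // failure is a plain TCP "connection refused", not a TLS or credentials problem.
    // The address is taken from the stderr block above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // expected while the kubelet is crash-looping
    		return
    	}
    	defer conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }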
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
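The inspect output shows the Docker layer itself is healthy: the container is Running, 8441/tcp is published to 127.0.0.1:33511, and the functional-525396 network gives it 192.168.49.2, so the breakage is inside the guest (the kubelet loop above) rather than in the port mapping. As a sketch, the published apiserver port can be read back with the same inspect-template style that minikube's cli_runner uses for 22/tcp later in this log (assumes a local docker CLI and the container name above):

    // Hedged sketch: fetch the host port Docker published for the apiserver (8441/tcp)
    // of functional-525396. The Go template mirrors the "docker container inspect -f"
    // calls visible in the Last Start log below; the port and name come from the JSON above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-525396").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("published apiserver port:", strings.TrimSpace(string(out))) // 33511 per the inspect output above
    }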
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (353.019684ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 logs -n 25: (1.046186035s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-714395 image ls --format short --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format yaml --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format json --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format table --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh     │ functional-714395 ssh pgrep buildkitd                                                                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image   │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                            │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls                                                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete  │ -p functional-714395                                                                                                                              │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start   │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start   │ -p functional-525396 --alsologtostderr -v=8                                                                                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:latest                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add minikube-local-cache-test:functional-525396                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache delete minikube-local-cache-test:functional-525396                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl images                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ cache   │ functional-525396 cache reload                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ kubectl │ functional-525396 kubectl -- --context functional-525396 get pods                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:32:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:32:02.748489  826329 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:32:02.748673  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748687  826329 out.go:374] Setting ErrFile to fd 2...
	I1208 00:32:02.748692  826329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:32:02.748975  826329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:32:02.749379  826329 out.go:368] Setting JSON to false
	I1208 00:32:02.750240  826329 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18855,"bootTime":1765135068,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:32:02.750321  826329 start.go:143] virtualization:  
	I1208 00:32:02.755521  826329 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:32:02.759227  826329 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:32:02.759498  826329 notify.go:221] Checking for updates...
	I1208 00:32:02.765171  826329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:32:02.768668  826329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:02.771686  826329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:32:02.774728  826329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:32:02.777727  826329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:32:02.781794  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:02.781971  826329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:32:02.823053  826329 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:32:02.823186  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.879429  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.869702269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.879546  826329 docker.go:319] overlay module found
	I1208 00:32:02.884410  826329 out.go:179] * Using the docker driver based on existing profile
	I1208 00:32:02.887311  826329 start.go:309] selected driver: docker
	I1208 00:32:02.887330  826329 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.887447  826329 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:32:02.887565  826329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:32:02.942385  826329 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:32:02.932846048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:32:02.942810  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:02.942902  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:02.942960  826329 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:02.948301  826329 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:32:02.951106  826329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:32:02.954049  826329 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:32:02.956917  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:02.956968  826329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:32:02.956999  826329 cache.go:65] Caching tarball of preloaded images
	I1208 00:32:02.957004  826329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:32:02.957092  826329 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:32:02.957103  826329 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:32:02.957210  826329 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:32:02.976499  826329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:32:02.976524  826329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:32:02.976543  826329 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:32:02.976579  826329 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:32:02.976652  826329 start.go:364] duration metric: took 48.116µs to acquireMachinesLock for "functional-525396"
	I1208 00:32:02.976674  826329 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:32:02.976683  826329 fix.go:54] fixHost starting: 
	I1208 00:32:02.976940  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:02.996203  826329 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:32:02.996234  826329 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:32:02.999434  826329 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:32:02.999477  826329 machine.go:94] provisionDockerMachine start ...
	I1208 00:32:02.999559  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.021375  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.021746  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.021762  826329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:32:03.174523  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.174550  826329 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:32:03.174616  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.192743  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.193067  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.193084  826329 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:32:03.356577  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:32:03.356704  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.375055  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.375394  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.375419  826329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:32:03.529767  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:32:03.529793  826329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:32:03.529822  826329 ubuntu.go:190] setting up certificates
	I1208 00:32:03.529839  826329 provision.go:84] configureAuth start
	I1208 00:32:03.529901  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:03.552219  826329 provision.go:143] copyHostCerts
	I1208 00:32:03.552258  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552298  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:32:03.552310  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:32:03.552383  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:32:03.552464  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552480  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:32:03.552484  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:32:03.552511  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:32:03.552550  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552566  826329 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:32:03.552570  826329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:32:03.552592  826329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:32:03.552642  826329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:32:03.707027  826329 provision.go:177] copyRemoteCerts
	I1208 00:32:03.707105  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:32:03.707150  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.724035  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:03.830514  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:32:03.830586  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 00:32:03.848126  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:32:03.848238  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:32:03.865293  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:32:03.865368  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:32:03.882781  826329 provision.go:87] duration metric: took 352.917637ms to configureAuth
	I1208 00:32:03.882808  826329 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:32:03.883086  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:03.883204  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:03.900405  826329 main.go:143] libmachine: Using SSH client type: native
	I1208 00:32:03.900722  826329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:32:03.900745  826329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:32:04.247102  826329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:32:04.247132  826329 machine.go:97] duration metric: took 1.247646186s to provisionDockerMachine
	I1208 00:32:04.247143  826329 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:32:04.247156  826329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:32:04.247233  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:32:04.247291  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.269420  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.374672  826329 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:32:04.377926  826329 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:32:04.377948  826329 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:32:04.377953  826329 command_runner.go:130] > VERSION_ID="12"
	I1208 00:32:04.377958  826329 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:32:04.377964  826329 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:32:04.377968  826329 command_runner.go:130] > ID=debian
	I1208 00:32:04.377973  826329 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:32:04.377998  826329 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:32:04.378009  826329 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:32:04.378363  826329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:32:04.378386  826329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:32:04.378397  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:32:04.378453  826329 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:32:04.378535  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:32:04.378546  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 00:32:04.378621  826329 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:32:04.378628  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> /etc/test/nested/copy/791807/hosts
	I1208 00:32:04.378672  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:32:04.386632  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:04.404202  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:32:04.421545  826329 start.go:296] duration metric: took 174.385446ms for postStartSetup
	I1208 00:32:04.421649  826329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:32:04.421695  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.439941  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.543929  826329 command_runner.go:130] > 13%
	I1208 00:32:04.544005  826329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:32:04.548692  826329 command_runner.go:130] > 169G
	I1208 00:32:04.548719  826329 fix.go:56] duration metric: took 1.572034198s for fixHost
	I1208 00:32:04.548730  826329 start.go:83] releasing machines lock for "functional-525396", held for 1.572067364s
	I1208 00:32:04.548856  826329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:32:04.565574  826329 ssh_runner.go:195] Run: cat /version.json
	I1208 00:32:04.565638  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.565923  826329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:32:04.565984  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:04.584847  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.600519  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:04.771794  826329 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:32:04.774495  826329 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:32:04.774657  826329 ssh_runner.go:195] Run: systemctl --version
	I1208 00:32:04.780874  826329 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:32:04.780917  826329 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:32:04.781367  826329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:32:04.818112  826329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:32:04.822491  826329 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:32:04.822532  826329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:32:04.822595  826329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:32:04.830492  826329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:32:04.830518  826329 start.go:496] detecting cgroup driver to use...
	I1208 00:32:04.830579  826329 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:32:04.830661  826329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:32:04.846467  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:32:04.859999  826329 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:32:04.860093  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:32:04.876040  826329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:32:04.889316  826329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:32:04.999380  826329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:32:05.135529  826329 docker.go:234] disabling docker service ...
	I1208 00:32:05.135652  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:32:05.150887  826329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:32:05.164082  826329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:32:05.274195  826329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:32:05.386139  826329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:32:05.399321  826329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:32:05.411741  826329 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 00:32:05.412925  826329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:32:05.413007  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.421375  826329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:32:05.421462  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.430145  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.438751  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.447666  826329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:32:05.455572  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.464290  826329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.472537  826329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:05.481189  826329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:32:05.487727  826329 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:32:05.488614  826329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:32:05.496261  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:05.603146  826329 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:32:05.769023  826329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:32:05.769169  826329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:32:05.773391  826329 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 00:32:05.773452  826329 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:32:05.773473  826329 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1208 00:32:05.773494  826329 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:05.773524  826329 command_runner.go:130] > Access: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773553  826329 command_runner.go:130] > Modify: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773581  826329 command_runner.go:130] > Change: 2025-12-08 00:32:05.710977948 +0000
	I1208 00:32:05.773598  826329 command_runner.go:130] >  Birth: -
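(For reference, the "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a poll-until-timeout check: stat the socket path repeatedly until it appears or the deadline passes. A minimal Go sketch of that pattern — a hypothetical helper for illustration, not minikube's actual implementation — could look like this:)

// waitForSocket polls until path exists or the timeout elapses.
// Illustrative sketch only; minikube's real logic runs stat over SSH as logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}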
	I1208 00:32:05.774292  826329 start.go:564] Will wait 60s for crictl version
	I1208 00:32:05.774387  826329 ssh_runner.go:195] Run: which crictl
	I1208 00:32:05.778688  826329 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:32:05.779547  826329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:32:05.803509  826329 command_runner.go:130] > Version:  0.1.0
	I1208 00:32:05.803790  826329 command_runner.go:130] > RuntimeName:  cri-o
	I1208 00:32:05.804036  826329 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1208 00:32:05.804294  826329 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:32:05.806608  826329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:32:05.806739  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.840244  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.840321  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.840340  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.840361  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.840391  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.840415  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.840434  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.840452  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.840471  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.840498  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.840519  826329 command_runner.go:130] >      static
	I1208 00:32:05.840536  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.840553  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.840567  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.840593  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.840612  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.840629  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.840647  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.840664  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.840690  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.841800  826329 ssh_runner.go:195] Run: crio --version
	I1208 00:32:05.872333  826329 command_runner.go:130] > crio version 1.34.3
	I1208 00:32:05.872357  826329 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1208 00:32:05.872369  826329 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1208 00:32:05.872376  826329 command_runner.go:130] >    GitTreeState:   dirty
	I1208 00:32:05.872381  826329 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1208 00:32:05.872385  826329 command_runner.go:130] >    GoVersion:      go1.24.6
	I1208 00:32:05.872389  826329 command_runner.go:130] >    Compiler:       gc
	I1208 00:32:05.872395  826329 command_runner.go:130] >    Platform:       linux/arm64
	I1208 00:32:05.872399  826329 command_runner.go:130] >    Linkmode:       static
	I1208 00:32:05.872408  826329 command_runner.go:130] >    BuildTags:
	I1208 00:32:05.872412  826329 command_runner.go:130] >      static
	I1208 00:32:05.872422  826329 command_runner.go:130] >      netgo
	I1208 00:32:05.872437  826329 command_runner.go:130] >      osusergo
	I1208 00:32:05.872444  826329 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1208 00:32:05.872448  826329 command_runner.go:130] >      seccomp
	I1208 00:32:05.872451  826329 command_runner.go:130] >      apparmor
	I1208 00:32:05.872457  826329 command_runner.go:130] >      selinux
	I1208 00:32:05.872463  826329 command_runner.go:130] >    LDFlags:          unknown
	I1208 00:32:05.872467  826329 command_runner.go:130] >    SeccompEnabled:   true
	I1208 00:32:05.872480  826329 command_runner.go:130] >    AppArmorEnabled:  false
	I1208 00:32:05.877414  826329 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:32:05.880269  826329 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:32:05.896780  826329 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:32:05.900764  826329 command_runner.go:130] > 192.168.49.1	host.minikube.internal
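(The grep above confirms that /etc/hosts inside the node already maps host.minikube.internal to the gateway IP; when the entry is missing it gets appended. A rough Go sketch of that idempotent check-then-append pattern — hypothetical helper, not minikube's code, which issues the grep over SSH as shown:)

// ensureHostsEntry appends "ip<TAB>host" to the hosts file if the host name
// does not already appear anywhere in it. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), host) {
		return nil // entry already present
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}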
	I1208 00:32:05.900873  826329 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:32:05.900985  826329 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:32:05.901051  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.935654  826329 command_runner.go:130] > {
	I1208 00:32:05.935679  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.935684  826329 command_runner.go:130] >     {
	I1208 00:32:05.935694  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.935699  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935705  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.935708  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935713  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935724  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.935736  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.935743  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935756  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.935763  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935768  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935772  826329 command_runner.go:130] >     },
	I1208 00:32:05.935775  826329 command_runner.go:130] >     {
	I1208 00:32:05.935781  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.935787  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935793  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.935796  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935800  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935810  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.935821  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.935825  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935829  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.935836  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.935845  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935853  826329 command_runner.go:130] >     },
	I1208 00:32:05.935857  826329 command_runner.go:130] >     {
	I1208 00:32:05.935864  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.935870  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935876  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.935879  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935885  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935894  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.935905  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.935908  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935912  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.935917  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.935923  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.935927  826329 command_runner.go:130] >     },
	I1208 00:32:05.935932  826329 command_runner.go:130] >     {
	I1208 00:32:05.935938  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.935946  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.935956  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.935962  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935967  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.935975  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.935986  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.935990  826329 command_runner.go:130] >       ],
	I1208 00:32:05.935994  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.936001  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936006  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936011  826329 command_runner.go:130] >       },
	I1208 00:32:05.936021  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936028  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936031  826329 command_runner.go:130] >     },
	I1208 00:32:05.936034  826329 command_runner.go:130] >     {
	I1208 00:32:05.936041  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.936048  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936053  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.936057  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936063  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936072  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.936083  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.936087  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936091  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.936095  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936101  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936105  826329 command_runner.go:130] >       },
	I1208 00:32:05.936110  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936116  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936119  826329 command_runner.go:130] >     },
	I1208 00:32:05.936122  826329 command_runner.go:130] >     {
	I1208 00:32:05.936129  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.936136  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936143  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.936152  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936160  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936169  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.936179  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.936184  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936189  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.936195  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936199  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936203  826329 command_runner.go:130] >       },
	I1208 00:32:05.936207  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936215  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936219  826329 command_runner.go:130] >     },
	I1208 00:32:05.936222  826329 command_runner.go:130] >     {
	I1208 00:32:05.936228  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.936235  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936240  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.936244  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936255  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936263  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.936271  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.936277  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936282  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.936288  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936292  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936295  826329 command_runner.go:130] >     },
	I1208 00:32:05.936298  826329 command_runner.go:130] >     {
	I1208 00:32:05.936306  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.936313  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936318  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.936322  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936326  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936336  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.936362  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.936372  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936377  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.936387  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936391  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.936395  826329 command_runner.go:130] >       },
	I1208 00:32:05.936406  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936410  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.936414  826329 command_runner.go:130] >     },
	I1208 00:32:05.936417  826329 command_runner.go:130] >     {
	I1208 00:32:05.936424  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.936432  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.936437  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.936441  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936445  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.936455  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.936465  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.936469  826329 command_runner.go:130] >       ],
	I1208 00:32:05.936473  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.936483  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.936487  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.936490  826329 command_runner.go:130] >       },
	I1208 00:32:05.936500  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.936504  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.936507  826329 command_runner.go:130] >     }
	I1208 00:32:05.936510  826329 command_runner.go:130] >   ]
	I1208 00:32:05.936513  826329 command_runner.go:130] > }
	I1208 00:32:05.936690  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.936705  826329 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:32:05.936757  826329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:32:05.965491  826329 command_runner.go:130] > {
	I1208 00:32:05.965510  826329 command_runner.go:130] >   "images":  [
	I1208 00:32:05.965515  826329 command_runner.go:130] >     {
	I1208 00:32:05.965525  826329 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:32:05.965542  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965549  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:32:05.965553  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965557  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965584  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1208 00:32:05.965593  826329 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1208 00:32:05.965596  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965600  826329 command_runner.go:130] >       "size":  "111333938",
	I1208 00:32:05.965604  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965614  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965618  826329 command_runner.go:130] >     },
	I1208 00:32:05.965620  826329 command_runner.go:130] >     {
	I1208 00:32:05.965627  826329 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:32:05.965630  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965635  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:32:05.965639  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965642  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965650  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1208 00:32:05.965659  826329 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:32:05.965662  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965666  826329 command_runner.go:130] >       "size":  "29037500",
	I1208 00:32:05.965669  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965675  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965679  826329 command_runner.go:130] >     },
	I1208 00:32:05.965682  826329 command_runner.go:130] >     {
	I1208 00:32:05.965689  826329 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:32:05.965692  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965700  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:32:05.965704  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965708  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965715  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1208 00:32:05.965723  826329 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1208 00:32:05.965726  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965733  826329 command_runner.go:130] >       "size":  "74491780",
	I1208 00:32:05.965738  826329 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:32:05.965741  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965744  826329 command_runner.go:130] >     },
	I1208 00:32:05.965747  826329 command_runner.go:130] >     {
	I1208 00:32:05.965754  826329 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:32:05.965758  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965763  826329 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:32:05.965768  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965772  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965779  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1208 00:32:05.965786  826329 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1208 00:32:05.965789  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965793  826329 command_runner.go:130] >       "size":  "60857170",
	I1208 00:32:05.965796  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965800  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965803  826329 command_runner.go:130] >       },
	I1208 00:32:05.965811  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965815  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965818  826329 command_runner.go:130] >     },
	I1208 00:32:05.965821  826329 command_runner.go:130] >     {
	I1208 00:32:05.965827  826329 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:32:05.965831  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965841  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:32:05.965844  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965848  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965859  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1208 00:32:05.965867  826329 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1208 00:32:05.965870  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965874  826329 command_runner.go:130] >       "size":  "84949999",
	I1208 00:32:05.965877  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965881  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965884  826329 command_runner.go:130] >       },
	I1208 00:32:05.965891  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965895  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965898  826329 command_runner.go:130] >     },
	I1208 00:32:05.965901  826329 command_runner.go:130] >     {
	I1208 00:32:05.965907  826329 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:32:05.965911  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965917  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:32:05.965920  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965924  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.965932  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1208 00:32:05.965944  826329 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1208 00:32:05.965947  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965951  826329 command_runner.go:130] >       "size":  "72170325",
	I1208 00:32:05.965954  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.965958  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.965961  826329 command_runner.go:130] >       },
	I1208 00:32:05.965964  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.965968  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.965971  826329 command_runner.go:130] >     },
	I1208 00:32:05.965974  826329 command_runner.go:130] >     {
	I1208 00:32:05.965980  826329 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:32:05.965984  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.965989  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:32:05.965992  826329 command_runner.go:130] >       ],
	I1208 00:32:05.965995  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966003  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1208 00:32:05.966013  826329 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:32:05.966016  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966020  826329 command_runner.go:130] >       "size":  "74106775",
	I1208 00:32:05.966023  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966027  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966030  826329 command_runner.go:130] >     },
	I1208 00:32:05.966033  826329 command_runner.go:130] >     {
	I1208 00:32:05.966042  826329 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:32:05.966046  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966051  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:32:05.966054  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966058  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966066  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1208 00:32:05.966082  826329 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1208 00:32:05.966086  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966090  826329 command_runner.go:130] >       "size":  "49822549",
	I1208 00:32:05.966094  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966097  826329 command_runner.go:130] >         "value":  "0"
	I1208 00:32:05.966100  826329 command_runner.go:130] >       },
	I1208 00:32:05.966104  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966109  826329 command_runner.go:130] >       "pinned":  false
	I1208 00:32:05.966112  826329 command_runner.go:130] >     },
	I1208 00:32:05.966117  826329 command_runner.go:130] >     {
	I1208 00:32:05.966124  826329 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:32:05.966127  826329 command_runner.go:130] >       "repoTags":  [
	I1208 00:32:05.966131  826329 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:32:05.966136  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966140  826329 command_runner.go:130] >       "repoDigests":  [
	I1208 00:32:05.966149  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1208 00:32:05.966156  826329 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1208 00:32:05.966160  826329 command_runner.go:130] >       ],
	I1208 00:32:05.966163  826329 command_runner.go:130] >       "size":  "519884",
	I1208 00:32:05.966167  826329 command_runner.go:130] >       "uid":  {
	I1208 00:32:05.966171  826329 command_runner.go:130] >         "value":  "65535"
	I1208 00:32:05.966173  826329 command_runner.go:130] >       },
	I1208 00:32:05.966177  826329 command_runner.go:130] >       "username":  "",
	I1208 00:32:05.966180  826329 command_runner.go:130] >       "pinned":  true
	I1208 00:32:05.966183  826329 command_runner.go:130] >     }
	I1208 00:32:05.966186  826329 command_runner.go:130] >   ]
	I1208 00:32:05.966189  826329 command_runner.go:130] > }
	I1208 00:32:05.968541  826329 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:32:05.968564  826329 cache_images.go:86] Images are preloaded, skipping loading
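(The "all images are preloaded" / "Images are preloaded, skipping loading" decision above is driven by the `sudo crictl images --output json` output dumped twice earlier. A minimal Go sketch of decoding that JSON shape and checking for expected repo tags — the struct fields mirror the keys visible in the log; this is an illustrative sketch, not minikube's cache_images code:)

// Decode `crictl images --output json` and report which expected tags are present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"} {
		fmt.Printf("%s present: %v\n", want, have[want])
	}
}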
	I1208 00:32:05.968572  826329 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:32:05.968676  826329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:32:05.968759  826329 ssh_runner.go:195] Run: crio config
	I1208 00:32:06.017314  826329 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 00:32:06.017338  826329 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 00:32:06.017347  826329 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 00:32:06.017350  826329 command_runner.go:130] > #
	I1208 00:32:06.017357  826329 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 00:32:06.017363  826329 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 00:32:06.017370  826329 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 00:32:06.017378  826329 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 00:32:06.017384  826329 command_runner.go:130] > # reload'.
	I1208 00:32:06.017391  826329 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 00:32:06.017404  826329 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 00:32:06.017411  826329 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 00:32:06.017417  826329 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 00:32:06.017423  826329 command_runner.go:130] > [crio]
	I1208 00:32:06.017429  826329 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 00:32:06.017434  826329 command_runner.go:130] > # containers images, in this directory.
	I1208 00:32:06.017704  826329 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 00:32:06.017722  826329 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 00:32:06.017729  826329 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1208 00:32:06.017738  826329 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1208 00:32:06.017898  826329 command_runner.go:130] > # imagestore = ""
	I1208 00:32:06.017914  826329 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 00:32:06.017922  826329 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 00:32:06.018164  826329 command_runner.go:130] > # storage_driver = "overlay"
	I1208 00:32:06.018180  826329 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 00:32:06.018187  826329 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 00:32:06.018278  826329 command_runner.go:130] > # storage_option = [
	I1208 00:32:06.018455  826329 command_runner.go:130] > # ]
	I1208 00:32:06.018487  826329 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 00:32:06.018500  826329 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 00:32:06.018675  826329 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 00:32:06.018694  826329 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 00:32:06.018706  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 00:32:06.018719  826329 command_runner.go:130] > # always happen on a node reboot
	I1208 00:32:06.018990  826329 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 00:32:06.019024  826329 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 00:32:06.019035  826329 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 00:32:06.019041  826329 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 00:32:06.019224  826329 command_runner.go:130] > # version_file_persist = ""
	I1208 00:32:06.019243  826329 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 00:32:06.019258  826329 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 00:32:06.019484  826329 command_runner.go:130] > # internal_wipe = true
	I1208 00:32:06.019500  826329 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1208 00:32:06.019507  826329 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1208 00:32:06.019754  826329 command_runner.go:130] > # internal_repair = true
	I1208 00:32:06.019769  826329 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 00:32:06.019785  826329 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 00:32:06.019793  826329 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 00:32:06.020120  826329 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 00:32:06.020138  826329 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 00:32:06.020143  826329 command_runner.go:130] > [crio.api]
	I1208 00:32:06.020148  826329 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 00:32:06.020346  826329 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 00:32:06.020366  826329 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 00:32:06.020581  826329 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 00:32:06.020605  826329 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 00:32:06.020611  826329 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 00:32:06.020863  826329 command_runner.go:130] > # stream_port = "0"
	I1208 00:32:06.020878  826329 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 00:32:06.021158  826329 command_runner.go:130] > # stream_enable_tls = false
	I1208 00:32:06.021176  826329 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 00:32:06.021352  826329 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 00:32:06.021367  826329 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 00:32:06.021380  826329 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021617  826329 command_runner.go:130] > # stream_tls_cert = ""
	I1208 00:32:06.021634  826329 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 00:32:06.021641  826329 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1208 00:32:06.021794  826329 command_runner.go:130] > # stream_tls_key = ""
	I1208 00:32:06.021808  826329 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 00:32:06.021824  826329 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 00:32:06.021840  826329 command_runner.go:130] > # automatically pick up the changes.
	I1208 00:32:06.022038  826329 command_runner.go:130] > # stream_tls_ca = ""
	I1208 00:32:06.022075  826329 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022282  826329 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 00:32:06.022297  826329 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1208 00:32:06.022560  826329 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 00:32:06.022581  826329 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 00:32:06.022589  826329 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 00:32:06.022596  826329 command_runner.go:130] > [crio.runtime]
	I1208 00:32:06.022603  826329 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 00:32:06.022613  826329 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 00:32:06.022618  826329 command_runner.go:130] > # "nofile=1024:2048"
	I1208 00:32:06.022627  826329 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 00:32:06.022736  826329 command_runner.go:130] > # default_ulimits = [
	I1208 00:32:06.022966  826329 command_runner.go:130] > # ]
	I1208 00:32:06.022982  826329 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 00:32:06.023192  826329 command_runner.go:130] > # no_pivot = false
	I1208 00:32:06.023203  826329 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 00:32:06.023210  826329 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 00:32:06.023435  826329 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 00:32:06.023449  826329 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 00:32:06.023455  826329 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 00:32:06.023463  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023655  826329 command_runner.go:130] > # conmon = ""
	I1208 00:32:06.023668  826329 command_runner.go:130] > # Cgroup setting for conmon
	I1208 00:32:06.023697  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 00:32:06.023812  826329 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 00:32:06.023826  826329 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 00:32:06.023831  826329 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 00:32:06.023839  826329 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 00:32:06.023982  826329 command_runner.go:130] > # conmon_env = [
	I1208 00:32:06.024123  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024147  826329 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 00:32:06.024153  826329 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 00:32:06.024161  826329 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 00:32:06.024313  826329 command_runner.go:130] > # default_env = [
	I1208 00:32:06.024407  826329 command_runner.go:130] > # ]
	I1208 00:32:06.024424  826329 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 00:32:06.024439  826329 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1208 00:32:06.024689  826329 command_runner.go:130] > # selinux = false
	I1208 00:32:06.024713  826329 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 00:32:06.024722  826329 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1208 00:32:06.024727  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.024963  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.024977  826329 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1208 00:32:06.024983  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025171  826329 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1208 00:32:06.025185  826329 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 00:32:06.025199  826329 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 00:32:06.025214  826329 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 00:32:06.025222  826329 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 00:32:06.025227  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.025459  826329 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 00:32:06.025474  826329 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 00:32:06.025479  826329 command_runner.go:130] > # the cgroup blockio controller.
	I1208 00:32:06.025701  826329 command_runner.go:130] > # blockio_config_file = ""
	I1208 00:32:06.025716  826329 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1208 00:32:06.025721  826329 command_runner.go:130] > # blockio parameters.
	I1208 00:32:06.025998  826329 command_runner.go:130] > # blockio_reload = false
	I1208 00:32:06.026018  826329 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 00:32:06.026025  826329 command_runner.go:130] > # irqbalance daemon.
	I1208 00:32:06.026221  826329 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 00:32:06.026241  826329 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1208 00:32:06.026249  826329 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1208 00:32:06.026257  826329 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1208 00:32:06.026494  826329 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1208 00:32:06.026510  826329 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 00:32:06.026517  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.026722  826329 command_runner.go:130] > # rdt_config_file = ""
	I1208 00:32:06.026753  826329 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 00:32:06.026902  826329 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 00:32:06.026919  826329 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 00:32:06.027125  826329 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 00:32:06.027138  826329 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 00:32:06.027163  826329 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 00:32:06.027177  826329 command_runner.go:130] > # will be added.
	I1208 00:32:06.027277  826329 command_runner.go:130] > # default_capabilities = [
	I1208 00:32:06.027581  826329 command_runner.go:130] > # 	"CHOWN",
	I1208 00:32:06.027682  826329 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 00:32:06.027912  826329 command_runner.go:130] > # 	"FSETID",
	I1208 00:32:06.028073  826329 command_runner.go:130] > # 	"FOWNER",
	I1208 00:32:06.028166  826329 command_runner.go:130] > # 	"SETGID",
	I1208 00:32:06.028351  826329 command_runner.go:130] > # 	"SETUID",
	I1208 00:32:06.028526  826329 command_runner.go:130] > # 	"SETPCAP",
	I1208 00:32:06.028680  826329 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 00:32:06.028802  826329 command_runner.go:130] > # 	"KILL",
	I1208 00:32:06.028996  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029019  826329 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 00:32:06.029028  826329 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 00:32:06.029301  826329 command_runner.go:130] > # add_inheritable_capabilities = false
	I1208 00:32:06.029326  826329 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 00:32:06.029333  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029338  826329 command_runner.go:130] > default_sysctls = [
	I1208 00:32:06.029464  826329 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1208 00:32:06.029477  826329 command_runner.go:130] > ]
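For reference, the list above follows CRI-O's key=value sysctl syntax; a drop-in extending it might look like the sketch below. The extra ping_group_range entry and the drop-in file name are hypothetical illustrations, not part of this test run's configuration.
	# hypothetical drop-in, e.g. /etc/crio/crio.conf.d/99-sysctls.conf (illustration only)
	[crio.runtime]
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		"net.ipv4.ping_group_range=0 2147483647",   # hypothetical extra namespaced sysctl
	]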
	I1208 00:32:06.029483  826329 command_runner.go:130] > # List of devices on the host that a
	I1208 00:32:06.029491  826329 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 00:32:06.029495  826329 command_runner.go:130] > # allowed_devices = [
	I1208 00:32:06.029499  826329 command_runner.go:130] > # 	"/dev/fuse",
	I1208 00:32:06.029507  826329 command_runner.go:130] > # 	"/dev/net/tun",
	I1208 00:32:06.029726  826329 command_runner.go:130] > # ]
	I1208 00:32:06.029756  826329 command_runner.go:130] > # List of additional devices, specified as
	I1208 00:32:06.029769  826329 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 00:32:06.029775  826329 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 00:32:06.029782  826329 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 00:32:06.029898  826329 command_runner.go:130] > # additional_devices = [
	I1208 00:32:06.029911  826329 command_runner.go:130] > # ]
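As a sketch of the device syntax documented above ("<device-on-host>:<device-on-container>:<permissions>"), an uncommented block could look like this; the /dev/sdc mapping is purely illustrative and was not used in this run.
	# hypothetical device configuration (illustration only)
	[crio.runtime]
	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",   # host device : container device : permissions (hypothetical)
	]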
	I1208 00:32:06.029918  826329 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 00:32:06.029922  826329 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 00:32:06.030014  826329 command_runner.go:130] > # 	"/etc/cdi",
	I1208 00:32:06.030033  826329 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 00:32:06.030037  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030045  826329 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 00:32:06.030051  826329 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 00:32:06.030058  826329 command_runner.go:130] > # Defaults to false.
	I1208 00:32:06.030179  826329 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 00:32:06.030194  826329 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 00:32:06.030201  826329 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 00:32:06.030206  826329 command_runner.go:130] > # hooks_dir = [
	I1208 00:32:06.030462  826329 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 00:32:06.030539  826329 command_runner.go:130] > # ]
	I1208 00:32:06.030554  826329 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 00:32:06.030561  826329 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 00:32:06.030592  826329 command_runner.go:130] > # its default mounts from the following two files:
	I1208 00:32:06.030598  826329 command_runner.go:130] > #
	I1208 00:32:06.030608  826329 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 00:32:06.030631  826329 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 00:32:06.030642  826329 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 00:32:06.030646  826329 command_runner.go:130] > #
	I1208 00:32:06.030658  826329 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 00:32:06.030668  826329 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 00:32:06.030675  826329 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 00:32:06.030680  826329 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 00:32:06.030684  826329 command_runner.go:130] > #
	I1208 00:32:06.030688  826329 command_runner.go:130] > # default_mounts_file = ""
	I1208 00:32:06.030697  826329 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 00:32:06.030710  826329 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 00:32:06.030795  826329 command_runner.go:130] > # pids_limit = -1
	I1208 00:32:06.030811  826329 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1208 00:32:06.030858  826329 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 00:32:06.030867  826329 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 00:32:06.030881  826329 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 00:32:06.030886  826329 command_runner.go:130] > # log_size_max = -1
	I1208 00:32:06.030903  826329 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 00:32:06.031086  826329 command_runner.go:130] > # log_to_journald = false
	I1208 00:32:06.031102  826329 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 00:32:06.031167  826329 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 00:32:06.031181  826329 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 00:32:06.031241  826329 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 00:32:06.031258  826329 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 00:32:06.031327  826329 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 00:32:06.031335  826329 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 00:32:06.031339  826329 command_runner.go:130] > # read_only = false
	I1208 00:32:06.031345  826329 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 00:32:06.031377  826329 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 00:32:06.031383  826329 command_runner.go:130] > # live configuration reload.
	I1208 00:32:06.031388  826329 command_runner.go:130] > # log_level = "info"
	I1208 00:32:06.031397  826329 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 00:32:06.031408  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.031412  826329 command_runner.go:130] > # log_filter = ""
	I1208 00:32:06.031419  826329 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031430  826329 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 00:32:06.031434  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031452  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031456  826329 command_runner.go:130] > # uid_mappings = ""
	I1208 00:32:06.031462  826329 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 00:32:06.031468  826329 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 00:32:06.031472  826329 command_runner.go:130] > # separated by comma.
	I1208 00:32:06.031482  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031553  826329 command_runner.go:130] > # gid_mappings = ""
	I1208 00:32:06.031569  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 00:32:06.031632  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031648  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031656  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.031742  826329 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 00:32:06.031759  826329 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 00:32:06.031785  826329 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 00:32:06.031798  826329 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 00:32:06.031807  826329 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1208 00:32:06.032017  826329 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 00:32:06.032056  826329 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 00:32:06.032071  826329 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 00:32:06.032077  826329 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1208 00:32:06.032099  826329 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 00:32:06.032106  826329 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 00:32:06.032112  826329 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 00:32:06.032205  826329 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 00:32:06.032267  826329 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 00:32:06.032278  826329 command_runner.go:130] > # drop_infra_ctr = true
	I1208 00:32:06.032285  826329 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 00:32:06.032292  826329 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 00:32:06.032307  826329 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 00:32:06.032340  826329 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 00:32:06.032356  826329 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1208 00:32:06.032371  826329 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1208 00:32:06.032378  826329 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1208 00:32:06.032384  826329 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1208 00:32:06.032394  826329 command_runner.go:130] > # shared_cpuset = ""
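The Linux CPU list format referenced above accepts single CPUs, ranges and comma-separated combinations; a hypothetical pinning (not the configuration under test) might read:
	# hypothetical CPU pinning (illustration only)
	[crio.runtime]
	infra_ctr_cpuset = "0-1"      # infra containers confined to CPUs 0 and 1
	shared_cpuset = "2,4-7"       # CPUs shareable with guaranteed containers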
	I1208 00:32:06.032400  826329 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 00:32:06.032411  826329 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 00:32:06.032448  826329 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 00:32:06.032463  826329 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 00:32:06.032467  826329 command_runner.go:130] > # pinns_path = ""
	I1208 00:32:06.032473  826329 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1208 00:32:06.032479  826329 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1208 00:32:06.032487  826329 command_runner.go:130] > # enable_criu_support = true
	I1208 00:32:06.032493  826329 command_runner.go:130] > # Enable/disable the generation of the container,
	I1208 00:32:06.032500  826329 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1208 00:32:06.032732  826329 command_runner.go:130] > # enable_pod_events = false
	I1208 00:32:06.032748  826329 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 00:32:06.032827  826329 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1208 00:32:06.032846  826329 command_runner.go:130] > # default_runtime = "crun"
	I1208 00:32:06.032871  826329 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 00:32:06.032889  826329 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1208 00:32:06.032901  826329 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 00:32:06.032911  826329 command_runner.go:130] > # creation as a file is not desired either.
	I1208 00:32:06.032919  826329 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 00:32:06.032929  826329 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 00:32:06.032938  826329 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 00:32:06.032974  826329 command_runner.go:130] > # ]
	I1208 00:32:06.033041  826329 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 00:32:06.033057  826329 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 00:32:06.033064  826329 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1208 00:32:06.033070  826329 command_runner.go:130] > # Each entry in the table should follow the format:
	I1208 00:32:06.033073  826329 command_runner.go:130] > #
	I1208 00:32:06.033106  826329 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1208 00:32:06.033112  826329 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1208 00:32:06.033117  826329 command_runner.go:130] > # runtime_type = "oci"
	I1208 00:32:06.033192  826329 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1208 00:32:06.033209  826329 command_runner.go:130] > # inherit_default_runtime = false
	I1208 00:32:06.033214  826329 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1208 00:32:06.033219  826329 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1208 00:32:06.033225  826329 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1208 00:32:06.033228  826329 command_runner.go:130] > # monitor_env = []
	I1208 00:32:06.033233  826329 command_runner.go:130] > # privileged_without_host_devices = false
	I1208 00:32:06.033237  826329 command_runner.go:130] > # allowed_annotations = []
	I1208 00:32:06.033263  826329 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1208 00:32:06.033276  826329 command_runner.go:130] > # no_sync_log = false
	I1208 00:32:06.033282  826329 command_runner.go:130] > # default_annotations = {}
	I1208 00:32:06.033376  826329 command_runner.go:130] > # stream_websockets = false
	I1208 00:32:06.033384  826329 command_runner.go:130] > # seccomp_profile = ""
	I1208 00:32:06.033433  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.033444  826329 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1208 00:32:06.033456  826329 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1208 00:32:06.033467  826329 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 00:32:06.033474  826329 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 00:32:06.033477  826329 command_runner.go:130] > #   in $PATH.
	I1208 00:32:06.033483  826329 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1208 00:32:06.033489  826329 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 00:32:06.033495  826329 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1208 00:32:06.033504  826329 command_runner.go:130] > #   state.
	I1208 00:32:06.033518  826329 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 00:32:06.033528  826329 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 00:32:06.033535  826329 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1208 00:32:06.033547  826329 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1208 00:32:06.033552  826329 command_runner.go:130] > #   the values from the default runtime on load time.
	I1208 00:32:06.033558  826329 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 00:32:06.033563  826329 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 00:32:06.033604  826329 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 00:32:06.033610  826329 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 00:32:06.033615  826329 command_runner.go:130] > #   The currently recognized values are:
	I1208 00:32:06.033697  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 00:32:06.033736  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 00:32:06.033745  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 00:32:06.033760  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 00:32:06.033770  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 00:32:06.033787  826329 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 00:32:06.033799  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1208 00:32:06.033811  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1208 00:32:06.033818  826329 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 00:32:06.033824  826329 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1208 00:32:06.033832  826329 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1208 00:32:06.033842  826329 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1208 00:32:06.033851  826329 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1208 00:32:06.033863  826329 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1208 00:32:06.033869  826329 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1208 00:32:06.033883  826329 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1208 00:32:06.033892  826329 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1208 00:32:06.033896  826329 command_runner.go:130] > #   deprecated option "conmon".
	I1208 00:32:06.033903  826329 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1208 00:32:06.033908  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1208 00:32:06.033916  826329 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1208 00:32:06.033925  826329 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 00:32:06.033933  826329 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1208 00:32:06.033944  826329 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1208 00:32:06.033955  826329 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1208 00:32:06.033959  826329 command_runner.go:130] > #   conmon-rs by using:
	I1208 00:32:06.033976  826329 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1208 00:32:06.033990  826329 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1208 00:32:06.033998  826329 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1208 00:32:06.034005  826329 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1208 00:32:06.034012  826329 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1208 00:32:06.034036  826329 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1208 00:32:06.034044  826329 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1208 00:32:06.034064  826329 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1208 00:32:06.034074  826329 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1208 00:32:06.034087  826329 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1208 00:32:06.034557  826329 command_runner.go:130] > #   when a machine crash happens.
	I1208 00:32:06.034567  826329 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1208 00:32:06.034582  826329 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1208 00:32:06.034589  826329 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1208 00:32:06.034594  826329 command_runner.go:130] > #   seccomp profile for the runtime.
	I1208 00:32:06.034680  826329 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1208 00:32:06.034713  826329 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1208 00:32:06.034720  826329 command_runner.go:130] > #
	I1208 00:32:06.034732  826329 command_runner.go:130] > # Using the seccomp notifier feature:
	I1208 00:32:06.034735  826329 command_runner.go:130] > #
	I1208 00:32:06.034742  826329 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1208 00:32:06.034749  826329 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1208 00:32:06.034762  826329 command_runner.go:130] > #
	I1208 00:32:06.034769  826329 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1208 00:32:06.034785  826329 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1208 00:32:06.034788  826329 command_runner.go:130] > #
	I1208 00:32:06.034795  826329 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1208 00:32:06.034799  826329 command_runner.go:130] > # feature.
	I1208 00:32:06.034802  826329 command_runner.go:130] > #
	I1208 00:32:06.034808  826329 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1208 00:32:06.034819  826329 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1208 00:32:06.034825  826329 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1208 00:32:06.034837  826329 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1208 00:32:06.034858  826329 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1208 00:32:06.034861  826329 command_runner.go:130] > #
	I1208 00:32:06.034867  826329 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1208 00:32:06.034878  826329 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1208 00:32:06.034881  826329 command_runner.go:130] > #
	I1208 00:32:06.034887  826329 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1208 00:32:06.034897  826329 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1208 00:32:06.034900  826329 command_runner.go:130] > #
	I1208 00:32:06.034906  826329 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1208 00:32:06.034916  826329 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1208 00:32:06.034920  826329 command_runner.go:130] > # limitation.
	I1208 00:32:06.034927  826329 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1208 00:32:06.034932  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1208 00:32:06.034939  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.034944  826329 command_runner.go:130] > runtime_root = "/run/crun"
	I1208 00:32:06.034954  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.034958  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.034962  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.034972  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.034976  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.034981  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.034990  826329 command_runner.go:130] > allowed_annotations = [
	I1208 00:32:06.034999  826329 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1208 00:32:06.035002  826329 command_runner.go:130] > ]
	I1208 00:32:06.035007  826329 command_runner.go:130] > privileged_without_host_devices = false
	I1208 00:32:06.035011  826329 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 00:32:06.035016  826329 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1208 00:32:06.035020  826329 command_runner.go:130] > runtime_type = ""
	I1208 00:32:06.035024  826329 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 00:32:06.035034  826329 command_runner.go:130] > inherit_default_runtime = false
	I1208 00:32:06.035038  826329 command_runner.go:130] > runtime_config_path = ""
	I1208 00:32:06.035042  826329 command_runner.go:130] > container_min_memory = ""
	I1208 00:32:06.035046  826329 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1208 00:32:06.035050  826329 command_runner.go:130] > monitor_cgroup = "pod"
	I1208 00:32:06.035054  826329 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 00:32:06.035145  826329 command_runner.go:130] > privileged_without_host_devices = false
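Following the runtime-handler format documented above, one additional handler entry might be declared as sketched below; the "kata" handler name, binary path and config path are hypothetical and are not present on this node.
	# hypothetical VM-type runtime handler (illustration only)
	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	runtime_root = "/run/vc"
	runtime_config_path = "/etc/kata-containers/configuration.toml"   # only valid for the vm runtime_type
	privileged_without_host_devices = true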
	I1208 00:32:06.035184  826329 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 00:32:06.035191  826329 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 00:32:06.035197  826329 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 00:32:06.035205  826329 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1208 00:32:06.035222  826329 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1208 00:32:06.035233  826329 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1208 00:32:06.035249  826329 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1208 00:32:06.035255  826329 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 00:32:06.035265  826329 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 00:32:06.035274  826329 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 00:32:06.035280  826329 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 00:32:06.035291  826329 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 00:32:06.035294  826329 command_runner.go:130] > # Example:
	I1208 00:32:06.035299  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 00:32:06.035309  826329 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 00:32:06.035318  826329 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 00:32:06.035324  826329 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 00:32:06.035413  826329 command_runner.go:130] > # cpuset = "0-1"
	I1208 00:32:06.035447  826329 command_runner.go:130] > # cpushares = "5"
	I1208 00:32:06.035460  826329 command_runner.go:130] > # cpuquota = "1000"
	I1208 00:32:06.035471  826329 command_runner.go:130] > # cpuperiod = "100000"
	I1208 00:32:06.035475  826329 command_runner.go:130] > # cpulimit = "35"
	I1208 00:32:06.035479  826329 command_runner.go:130] > # Where:
	I1208 00:32:06.035483  826329 command_runner.go:130] > # The workload name is workload-type.
	I1208 00:32:06.035497  826329 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 00:32:06.035502  826329 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 00:32:06.035540  826329 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 00:32:06.035556  826329 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 00:32:06.035563  826329 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 00:32:06.035576  826329 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1208 00:32:06.035584  826329 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1208 00:32:06.035592  826329 command_runner.go:130] > # Default value is set to true
	I1208 00:32:06.035597  826329 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1208 00:32:06.035603  826329 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1208 00:32:06.035607  826329 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1208 00:32:06.035703  826329 command_runner.go:130] > # Default value is set to 'false'
	I1208 00:32:06.035729  826329 command_runner.go:130] > # disable_hostport_mapping = false
	I1208 00:32:06.035736  826329 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1208 00:32:06.035751  826329 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1208 00:32:06.035755  826329 command_runner.go:130] > # timezone = ""
	I1208 00:32:06.035762  826329 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 00:32:06.035769  826329 command_runner.go:130] > #
	I1208 00:32:06.035775  826329 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 00:32:06.035782  826329 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1208 00:32:06.035785  826329 command_runner.go:130] > [crio.image]
	I1208 00:32:06.035791  826329 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 00:32:06.035796  826329 command_runner.go:130] > # default_transport = "docker://"
	I1208 00:32:06.035802  826329 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 00:32:06.035813  826329 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035818  826329 command_runner.go:130] > # global_auth_file = ""
	I1208 00:32:06.035823  826329 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 00:32:06.035833  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035852  826329 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1208 00:32:06.035863  826329 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 00:32:06.035874  826329 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 00:32:06.035950  826329 command_runner.go:130] > # This option supports live configuration reload.
	I1208 00:32:06.035964  826329 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 00:32:06.035972  826329 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 00:32:06.035989  826329 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1208 00:32:06.035998  826329 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1208 00:32:06.036009  826329 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 00:32:06.036013  826329 command_runner.go:130] > # pause_command = "/pause"
	I1208 00:32:06.036019  826329 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1208 00:32:06.036030  826329 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1208 00:32:06.036036  826329 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1208 00:32:06.036043  826329 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1208 00:32:06.036052  826329 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1208 00:32:06.036058  826329 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1208 00:32:06.036062  826329 command_runner.go:130] > # pinned_images = [
	I1208 00:32:06.036065  826329 command_runner.go:130] > # ]
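To illustrate the exact, glob and keyword matching described above, a pinned_images list could look like the following sketch; the image names other than the pause image are arbitrary examples.
	# hypothetical pinned images (illustration only)
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",    # exact match (full name)
		"quay.io/example/agent*",          # glob: wildcard only at the end (hypothetical image)
		"*critical-daemon*",               # keyword: wildcards on both ends (hypothetical image)
	]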
	I1208 00:32:06.036071  826329 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 00:32:06.036077  826329 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 00:32:06.036087  826329 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 00:32:06.036093  826329 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 00:32:06.036104  826329 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 00:32:06.036109  826329 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1208 00:32:06.036115  826329 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1208 00:32:06.036126  826329 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1208 00:32:06.036133  826329 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1208 00:32:06.036139  826329 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1208 00:32:06.036145  826329 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1208 00:32:06.036150  826329 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1208 00:32:06.036160  826329 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 00:32:06.036167  826329 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 00:32:06.036172  826329 command_runner.go:130] > # changing them here.
	I1208 00:32:06.036184  826329 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1208 00:32:06.036193  826329 command_runner.go:130] > # insecure_registries = [
	I1208 00:32:06.036196  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036300  826329 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 00:32:06.036317  826329 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1208 00:32:06.036326  826329 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 00:32:06.036331  826329 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 00:32:06.036335  826329 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 00:32:06.036342  826329 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1208 00:32:06.036353  826329 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1208 00:32:06.036358  826329 command_runner.go:130] > # auto_reload_registries = false
	I1208 00:32:06.036365  826329 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1208 00:32:06.036377  826329 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1208 00:32:06.036388  826329 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1208 00:32:06.036393  826329 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1208 00:32:06.036398  826329 command_runner.go:130] > # The mode of short name resolution.
	I1208 00:32:06.036404  826329 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1208 00:32:06.036418  826329 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1208 00:32:06.036424  826329 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1208 00:32:06.036433  826329 command_runner.go:130] > # short_name_mode = "enforcing"
	I1208 00:32:06.036439  826329 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1208 00:32:06.036446  826329 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1208 00:32:06.036457  826329 command_runner.go:130] > # oci_artifact_mount_support = true
	I1208 00:32:06.036463  826329 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 00:32:06.036466  826329 command_runner.go:130] > # CNI plugins.
	I1208 00:32:06.036469  826329 command_runner.go:130] > [crio.network]
	I1208 00:32:06.036476  826329 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 00:32:06.036481  826329 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1208 00:32:06.036485  826329 command_runner.go:130] > # cni_default_network = ""
	I1208 00:32:06.036496  826329 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 00:32:06.036501  826329 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 00:32:06.036506  826329 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 00:32:06.036515  826329 command_runner.go:130] > # plugin_dirs = [
	I1208 00:32:06.036642  826329 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 00:32:06.036668  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036675  826329 command_runner.go:130] > # List of included pod metrics.
	I1208 00:32:06.036679  826329 command_runner.go:130] > # included_pod_metrics = [
	I1208 00:32:06.036860  826329 command_runner.go:130] > # ]
	I1208 00:32:06.036921  826329 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 00:32:06.036927  826329 command_runner.go:130] > [crio.metrics]
	I1208 00:32:06.036932  826329 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 00:32:06.036937  826329 command_runner.go:130] > # enable_metrics = false
	I1208 00:32:06.036942  826329 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 00:32:06.036953  826329 command_runner.go:130] > # Per default all metrics are enabled.
	I1208 00:32:06.036960  826329 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1208 00:32:06.036994  826329 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 00:32:06.037043  826329 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 00:32:06.037079  826329 command_runner.go:130] > # metrics_collectors = [
	I1208 00:32:06.037090  826329 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 00:32:06.037155  826329 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1208 00:32:06.037178  826329 command_runner.go:130] > # 	"containers_oom_total",
	I1208 00:32:06.037336  826329 command_runner.go:130] > # 	"processes_defunct",
	I1208 00:32:06.037413  826329 command_runner.go:130] > # 	"operations_total",
	I1208 00:32:06.037662  826329 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 00:32:06.037734  826329 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 00:32:06.037748  826329 command_runner.go:130] > # 	"operations_errors_total",
	I1208 00:32:06.037753  826329 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 00:32:06.037772  826329 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 00:32:06.037792  826329 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 00:32:06.037922  826329 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 00:32:06.037987  826329 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 00:32:06.038011  826329 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 00:32:06.038021  826329 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1208 00:32:06.038045  826329 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1208 00:32:06.038193  826329 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1208 00:32:06.038255  826329 command_runner.go:130] > # ]
	I1208 00:32:06.038268  826329 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1208 00:32:06.038283  826329 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1208 00:32:06.038321  826329 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 00:32:06.038335  826329 command_runner.go:130] > # metrics_port = 9090
	I1208 00:32:06.038341  826329 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 00:32:06.038408  826329 command_runner.go:130] > # metrics_socket = ""
	I1208 00:32:06.038423  826329 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 00:32:06.038430  826329 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 00:32:06.038449  826329 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 00:32:06.038461  826329 command_runner.go:130] > # certificate on any modification event.
	I1208 00:32:06.038588  826329 command_runner.go:130] > # metrics_cert = ""
	I1208 00:32:06.038614  826329 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 00:32:06.038622  826329 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 00:32:06.038740  826329 command_runner.go:130] > # metrics_key = ""
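A minimal metrics configuration using the collector names listed above might look like this sketch; enabling metrics and the chosen subset of collectors are illustrative and not what this job ran with.
	# hypothetical metrics configuration (illustration only)
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_total",
	]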
	I1208 00:32:06.038809  826329 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 00:32:06.038823  826329 command_runner.go:130] > [crio.tracing]
	I1208 00:32:06.038829  826329 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 00:32:06.038833  826329 command_runner.go:130] > # enable_tracing = false
	I1208 00:32:06.038876  826329 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1208 00:32:06.038890  826329 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1208 00:32:06.038899  826329 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1208 00:32:06.038973  826329 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1208 00:32:06.038987  826329 command_runner.go:130] > # CRI-O NRI configuration.
	I1208 00:32:06.038992  826329 command_runner.go:130] > [crio.nri]
	I1208 00:32:06.039013  826329 command_runner.go:130] > # Globally enable or disable NRI.
	I1208 00:32:06.039024  826329 command_runner.go:130] > # enable_nri = true
	I1208 00:32:06.039029  826329 command_runner.go:130] > # NRI socket to listen on.
	I1208 00:32:06.039033  826329 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1208 00:32:06.039044  826329 command_runner.go:130] > # NRI plugin directory to use.
	I1208 00:32:06.039198  826329 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1208 00:32:06.039225  826329 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1208 00:32:06.039233  826329 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1208 00:32:06.039239  826329 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1208 00:32:06.039363  826329 command_runner.go:130] > # nri_disable_connections = false
	I1208 00:32:06.039381  826329 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1208 00:32:06.039476  826329 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1208 00:32:06.039494  826329 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1208 00:32:06.039499  826329 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1208 00:32:06.039504  826329 command_runner.go:130] > # NRI default validator configuration.
	I1208 00:32:06.039511  826329 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1208 00:32:06.039518  826329 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1208 00:32:06.039557  826329 command_runner.go:130] > # can be restricted/rejected:
	I1208 00:32:06.039568  826329 command_runner.go:130] > # - OCI hook injection
	I1208 00:32:06.039573  826329 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1208 00:32:06.039586  826329 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1208 00:32:06.039595  826329 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1208 00:32:06.039600  826329 command_runner.go:130] > # - adjustment of linux namespaces
	I1208 00:32:06.039606  826329 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1208 00:32:06.039685  826329 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1208 00:32:06.039812  826329 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1208 00:32:06.039825  826329 command_runner.go:130] > #
	I1208 00:32:06.039830  826329 command_runner.go:130] > # [crio.nri.default_validator]
	I1208 00:32:06.039911  826329 command_runner.go:130] > # nri_enable_default_validator = false
	I1208 00:32:06.039939  826329 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1208 00:32:06.039947  826329 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1208 00:32:06.039959  826329 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1208 00:32:06.039966  826329 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1208 00:32:06.039971  826329 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1208 00:32:06.039975  826329 command_runner.go:130] > # nri_validator_required_plugins = [
	I1208 00:32:06.039978  826329 command_runner.go:130] > # ]
	I1208 00:32:06.039984  826329 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
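Spelled out as an actual table, the default validator described above would be configured roughly as below; the required plugin name and the tolerate-annotation value are hypothetical placeholders.
	# hypothetical NRI default validator settings (illustration only)
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"example-resource-policy",
	]
	nri_validator_tolerate_missing_plugins_annotation = "nri.example.io/tolerate-missing-plugins"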
	I1208 00:32:06.039994  826329 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 00:32:06.040003  826329 command_runner.go:130] > [crio.stats]
	I1208 00:32:06.040013  826329 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 00:32:06.040019  826329 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 00:32:06.040027  826329 command_runner.go:130] > # stats_collection_period = 0
	I1208 00:32:06.040033  826329 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1208 00:32:06.040043  826329 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1208 00:32:06.040047  826329 command_runner.go:130] > # collection_period = 0
	I1208 00:32:06.041802  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994368044Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1208 00:32:06.041819  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994407331Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1208 00:32:06.041829  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994434752Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1208 00:32:06.041836  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994457826Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1208 00:32:06.041847  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994536038Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:32:06.041867  826329 command_runner.go:130] ! time="2025-12-08T00:32:05.994955873Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1208 00:32:06.041895  826329 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 00:32:06.042057  826329 cni.go:84] Creating CNI manager for ""
	I1208 00:32:06.042089  826329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:32:06.042117  826329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:32:06.042147  826329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:32:06.042284  826329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:32:06.042367  826329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:32:06.049993  826329 command_runner.go:130] > kubeadm
	I1208 00:32:06.050024  826329 command_runner.go:130] > kubectl
	I1208 00:32:06.050029  826329 command_runner.go:130] > kubelet
	I1208 00:32:06.051018  826329 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:32:06.051091  826329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:32:06.059413  826329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:32:06.073688  826329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:32:06.087599  826329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 00:32:06.100920  826329 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:32:06.104607  826329 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:32:06.104862  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:06.223310  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:06.506702  826329 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:32:06.506774  826329 certs.go:195] generating shared ca certs ...
	I1208 00:32:06.506805  826329 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:06.507033  826329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:32:06.507124  826329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:32:06.507152  826329 certs.go:257] generating profile certs ...
	I1208 00:32:06.507310  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:32:06.507422  826329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:32:06.507510  826329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:32:06.507537  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:32:06.507566  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:32:06.507605  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:32:06.507636  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:32:06.507680  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:32:06.507713  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:32:06.507755  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:32:06.507788  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:32:06.507873  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:32:06.507940  826329 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:32:06.507964  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:32:06.508024  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:32:06.508086  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:32:06.508156  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:32:06.508255  826329 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:32:06.508336  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.508374  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.508417  826329 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.509152  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:32:06.534629  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:32:06.554458  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:32:06.573968  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:32:06.590997  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:32:06.608508  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:32:06.625424  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:32:06.642336  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:32:06.660002  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:32:06.677652  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:32:06.695647  826329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:32:06.713354  826329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:32:06.725836  826329 ssh_runner.go:195] Run: openssl version
	I1208 00:32:06.731951  826329 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:32:06.732096  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.739312  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:32:06.746650  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750259  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750312  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.750360  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:32:06.790520  826329 command_runner.go:130] > 51391683
	I1208 00:32:06.791045  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:32:06.798345  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.805645  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:32:06.813042  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816781  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816807  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.816859  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:32:06.857524  826329 command_runner.go:130] > 3ec20f2e
	I1208 00:32:06.857994  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:32:06.865262  826329 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.872409  826329 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:32:06.879529  826329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883021  826329 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883115  826329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.883198  826329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:32:06.923843  826329 command_runner.go:130] > b5213941
	I1208 00:32:06.924322  826329 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
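The sequence above installs each CA into the system trust store the way OpenSSL expects: hash the certificate, then symlink /etc/ssl/certs/<hash>.0 at the file. A minimal Go sketch of that step, shelling out to openssl just as the ssh_runner commands do; the certificate path is illustrative and the snippet assumes openssl is on PATH:

// A sketch only: hash a CA certificate with openssl and create the
// /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for lookups.
// Assumes openssl is on PATH; the paths are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <cert>  (prints e.g. "b5213941")
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// The symlink name is the printed hash plus ".0", as in the `test -L` probes above.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}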
	I1208 00:32:06.931656  826329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935287  826329 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:32:06.935325  826329 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:32:06.935332  826329 command_runner.go:130] > Device: 259,1	Inode: 1322385     Links: 1
	I1208 00:32:06.935354  826329 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:32:06.935369  826329 command_runner.go:130] > Access: 2025-12-08 00:27:59.408752113 +0000
	I1208 00:32:06.935374  826329 command_runner.go:130] > Modify: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935396  826329 command_runner.go:130] > Change: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935407  826329 command_runner.go:130] >  Birth: 2025-12-08 00:23:53.882517337 +0000
	I1208 00:32:06.935530  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:32:06.975831  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:06.976261  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:32:07.017790  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.017978  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:32:07.058488  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.058966  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:32:07.099457  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.099917  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:32:07.141471  826329 command_runner.go:130] > Certificate will not expire
	I1208 00:32:07.141903  826329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:32:07.182188  826329 command_runner.go:130] > Certificate will not expire
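Each `openssl x509 ... -checkend 86400` probe above asks whether a control-plane certificate will still be valid 24 hours from now. A minimal sketch of the same check in Go with crypto/x509, assuming a PEM-encoded certificate at an illustrative path:

// A sketch only: the Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
// The certificate path is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: is the certificate still valid 24 hours from now?
	if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}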
	I1208 00:32:07.182659  826329 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:32:07.182760  826329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:32:07.182825  826329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:32:07.209144  826329 cri.go:89] found id: ""
	I1208 00:32:07.209214  826329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:32:07.216134  826329 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:32:07.216154  826329 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:32:07.216162  826329 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:32:07.217097  826329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:32:07.217114  826329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:32:07.217178  826329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:32:07.224428  826329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:32:07.224856  826329 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-525396" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.224961  826329 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "functional-525396" cluster setting kubeconfig missing "functional-525396" context setting]
	I1208 00:32:07.225241  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.225667  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.225818  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.226341  826329 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:32:07.226363  826329 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:32:07.226369  826329 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:32:07.226375  826329 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:32:07.226381  826329 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:32:07.226674  826329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:32:07.226772  826329 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:32:07.234310  826329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:32:07.234378  826329 kubeadm.go:602] duration metric: took 17.25872ms to restartPrimaryControlPlane
	I1208 00:32:07.234395  826329 kubeadm.go:403] duration metric: took 51.743543ms to StartCluster
	I1208 00:32:07.234412  826329 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.234484  826329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.235129  826329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:32:07.235358  826329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 00:32:07.235583  826329 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:32:07.235658  826329 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:32:07.235740  826329 addons.go:70] Setting storage-provisioner=true in profile "functional-525396"
	I1208 00:32:07.235754  826329 addons.go:239] Setting addon storage-provisioner=true in "functional-525396"
	I1208 00:32:07.235778  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.236237  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.236576  826329 addons.go:70] Setting default-storageclass=true in profile "functional-525396"
	I1208 00:32:07.236601  826329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-525396"
	I1208 00:32:07.236875  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.242309  826329 out.go:179] * Verifying Kubernetes components...
	I1208 00:32:07.245184  826329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:32:07.271460  826329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:32:07.274400  826329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.274424  826329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:32:07.274492  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.276071  826329 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:32:07.276241  826329 kapi.go:59] client config for functional-525396: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:32:07.276512  826329 addons.go:239] Setting addon default-storageclass=true in "functional-525396"
	I1208 00:32:07.276540  826329 host.go:66] Checking if "functional-525396" exists ...
	I1208 00:32:07.276944  826329 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:32:07.314823  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.318477  826329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:07.318497  826329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:32:07.318558  826329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:32:07.352646  826329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:32:07.447557  826329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:32:07.488721  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:07.519084  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.257520  826329 node_ready.go:35] waiting up to 6m0s for node "functional-525396" to be "Ready" ...
	I1208 00:32:08.257618  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257654  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257688  826329 retry.go:31] will retry after 154.925821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257654  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.257704  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.257722  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257734  826329 retry.go:31] will retry after 240.899479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.257750  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.258076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.413579  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.477856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.477934  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.477962  826329 retry.go:31] will retry after 471.79599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.499019  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:08.559244  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:08.559341  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:08.559365  826329 retry.go:31] will retry after 419.613997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
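The validation failures above are expected while the apiserver is still coming back up on port 8441; the retry.go lines show minikube re-applying each addon manifest after a growing delay. A minimal sketch of that retry-with-backoff pattern (not minikube's actual retry.go), assuming kubectl is on PATH and using an illustrative manifest path:

// A sketch only of the retry-with-backoff pattern the retry.go lines show;
// not minikube's implementation. Assumes kubectl is on PATH and uses an
// illustrative manifest path.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// growing, jittered delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, like the "will retry after ..." lines above.
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, 150*time.Millisecond, func() error {
		return exec.Command("kubectl", "apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").Run()
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}

In the log the delays grow from roughly 150ms to about 3s before the apiserver finally answers and the applies succeed.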
	I1208 00:32:08.758693  826329 type.go:168] "Request Body" body=""
	I1208 00:32:08.758772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.759084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.950598  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:08.979140  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.022887  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.022933  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.022979  826329 retry.go:31] will retry after 789.955074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083550  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.083656  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.083684  826329 retry.go:31] will retry after 584.522236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.668477  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:09.723720  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.727856  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.727932  826329 retry.go:31] will retry after 996.136704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.757987  826329 type.go:168] "Request Body" body=""
	I1208 00:32:09.758082  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.813684  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:09.865943  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:09.869391  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:09.869422  826329 retry.go:31] will retry after 1.082403251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.257910  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:10.258329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.724942  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:10.758490  826329 type.go:168] "Request Body" body=""
	I1208 00:32:10.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.758896  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.786956  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:10.787023  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.787045  826329 retry.go:31] will retry after 1.653307887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:10.952461  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:11.017630  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:11.017682  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.017706  826329 retry.go:31] will retry after 1.450018323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:11.257721  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.258081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.757826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:11.757911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.258016  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.258092  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.258398  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:12.258449  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.440941  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:12.468519  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:12.523147  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.523192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.523212  826329 retry.go:31] will retry after 1.808868247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537050  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:12.537096  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.537115  826329 retry.go:31] will retry after 1.005297336s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:12.758616  826329 type.go:168] "Request Body" body=""
	I1208 00:32:12.758689  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.758985  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.257733  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.542714  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:13.607721  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:13.607772  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.607793  826329 retry.go:31] will retry after 2.59048957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:13.758025  826329 type.go:168] "Request Body" body=""
	I1208 00:32:13.758103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.257759  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.257837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.332402  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:14.393856  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:14.393908  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.393927  826329 retry.go:31] will retry after 3.003957784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:14.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:32:14.758447  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.758779  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:14.758833  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
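The repeated GET /api/v1/nodes/functional-525396 requests above are a readiness poll that tolerates "connection refused" while the apiserver restarts. Below is a small, hypothetical client-go sketch of such a poll (assuming a reachable kubeconfig path and a 500ms interval); it is illustrative only, not minikube's node_ready.go implementation.

// Hypothetical sketch of a node "Ready" poll matching the requests logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms until its Ready condition is
// True or the context expires; transient errors (e.g. connection refused
// while the apiserver comes back up) are logged and retried.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	_ = waitNodeReady(ctx, cs, "functional-525396")
}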
	I1208 00:32:15.258432  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.258504  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.258873  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.758697  826329 type.go:168] "Request Body" body=""
	I1208 00:32:15.758770  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.198619  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:16.257994  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.258110  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.258333  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.261663  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:16.261706  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.261724  826329 retry.go:31] will retry after 3.921003057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:16.758355  826329 type.go:168] "Request Body" body=""
	I1208 00:32:16.758442  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.758740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.258595  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.258667  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.259014  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:17.259070  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:17.398537  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:17.459046  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:17.459087  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.459108  826329 retry.go:31] will retry after 6.352068949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:17.758636  826329 type.go:168] "Request Body" body=""
	I1208 00:32:17.758713  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.759027  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.757758  826329 type.go:168] "Request Body" body=""
	I1208 00:32:18.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.758113  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.258205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.757895  826329 type.go:168] "Request Body" body=""
	I1208 00:32:19.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:19.758338  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:20.183008  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:20.244376  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.244427  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.244447  826329 retry.go:31] will retry after 4.642616038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:20.258603  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.258946  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:32:20.757858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.758256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:32:21.757997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.758369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:22.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.757950  826329 type.go:168] "Request Body" body=""
	I1208 00:32:22.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.758369  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.257963  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.258271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:32:23.758124  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.758456  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.758513  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.811708  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:23.877239  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:23.877286  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:23.877305  826329 retry.go:31] will retry after 3.991513365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.257726  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.757814  826329 type.go:168] "Request Body" body=""
	I1208 00:32:24.757890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.887652  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:24.946807  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:24.946870  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:24.946894  826329 retry.go:31] will retry after 6.868435312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:25.258372  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.258452  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.258751  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:25.758655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.759159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.759287  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:26.257937  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.258011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.258320  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:26.757849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.758164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.258591  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.758609  826329 type.go:168] "Request Body" body=""
	I1208 00:32:27.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.869339  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:27.929619  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:27.929669  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:27.929689  826329 retry.go:31] will retry after 5.640751927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:28.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.258197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:28.258246  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:32:28.757900  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.257906  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:32:29.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.758680  826329 type.go:168] "Request Body" body=""
	I1208 00:32:30.758746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.759010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:30.759051  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:31.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.258120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:32:31.757934  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.815479  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:31.877679  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:31.877725  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:31.877744  826329 retry.go:31] will retry after 9.288265427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:32.258204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.258274  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.258579  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:32:32.758594  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.758959  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.257805  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.258256  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:33.258316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:33.570705  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:33.628260  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:33.631756  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.631797  826329 retry.go:31] will retry after 7.380803559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:33.758003  826329 type.go:168] "Request Body" body=""
	I1208 00:32:33.758091  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.257826  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.257908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.757933  826329 type.go:168] "Request Body" body=""
	I1208 00:32:34.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.757723  826329 type.go:168] "Request Body" body=""
	I1208 00:32:35.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:35.758156  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:36.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:36.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.257953  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.258310  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.758204  826329 type.go:168] "Request Body" body=""
	I1208 00:32:37.758282  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.758636  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:37.758697  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:38.258444  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.258520  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.258964  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.758579  826329 type.go:168] "Request Body" body=""
	I1208 00:32:38.758657  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.758988  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.258591  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.259009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.757689  826329 type.go:168] "Request Body" body=""
	I1208 00:32:39.757764  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.758032  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.257724  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.257806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.258168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:40.258225  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:40.757812  826329 type.go:168] "Request Body" body=""
	I1208 00:32:40.757892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.013670  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:41.072281  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.076192  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.076223  826329 retry.go:31] will retry after 30.64284814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.166454  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:32:41.227404  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:41.227446  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.227466  826329 retry.go:31] will retry after 28.006603896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:32:41.258583  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.258655  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.758793  826329 type.go:168] "Request Body" body=""
	I1208 00:32:41.758886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.759193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.257895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.258236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:42.258293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:32:42.758154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.758523  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.258386  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.258459  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.258782  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.758542  826329 type.go:168] "Request Body" body=""
	I1208 00:32:43.758614  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.758961  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.258683  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.258759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:44.259091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.757800  826329 type.go:168] "Request Body" body=""
	I1208 00:32:44.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.758206  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.258097  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.259164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.757651  826329 type.go:168] "Request Body" body=""
	I1208 00:32:45.757746  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.758010  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.257735  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.257815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.258117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.757885  826329 type.go:168] "Request Body" body=""
	I1208 00:32:46.757969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.758288  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.758347  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:47.258326  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.258400  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.258685  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.758684  826329 type.go:168] "Request Body" body=""
	I1208 00:32:47.758763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.759114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.257709  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.257796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.757752  826329 type.go:168] "Request Body" body=""
	I1208 00:32:48.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.758123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:49.258218  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:49.757765  826329 type.go:168] "Request Body" body=""
	I1208 00:32:49.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:49.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.257803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:50.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:32:50.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:50.758188  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:51.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.258204  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:51.258253  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:51.757903  826329 type.go:168] "Request Body" body=""
	I1208 00:32:51.757978  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:51.758301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:52.757965  826329 type.go:168] "Request Body" body=""
	I1208 00:32:52.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:52.758392  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:53.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:32:53.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:53.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:53.758279  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:54.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.257882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:54.757818  826329 type.go:168] "Request Body" body=""
	I1208 00:32:54.757897  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:54.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.258277  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:55.757925  826329 type.go:168] "Request Body" body=""
	I1208 00:32:55.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:55.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:55.758403  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:56.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.258035  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.258362  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:56.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:32:56.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:56.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.258678  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.258763  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.259088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:57.757900  826329 type.go:168] "Request Body" body=""
	I1208 00:32:57.757974  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:57.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:58.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.258215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:58.258269  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
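	[Note] The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-525396 requests above, each answered with "connect: connection refused" and retried roughly every 500ms, are minikube polling the node's Ready condition while the apiserver is down. A minimal sketch of that kind of poll is below; it is an illustration using client-go, not minikube's actual node_ready.go, and the function name, clientset wiring, and interval are assumptions.

	// Sketch only (assumed helper, not minikube code): poll a node's Ready
	// condition until it is True or the context expires.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // ~500ms cadence, as in the log above
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// e.g. "connection refused" while the apiserver restarts; keep retrying
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}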
	I1208 00:32:58.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:32:58.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:58.758311  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.257792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.258100  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:59.757787  826329 type.go:168] "Request Body" body=""
	I1208 00:32:59.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:59.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:00.257846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:00.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:00.758031  826329 type.go:168] "Request Body" body=""
	I1208 00:33:00.758108  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:00.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.258268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:01.757962  826329 type.go:168] "Request Body" body=""
	I1208 00:33:01.758033  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:01.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:02.257983  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.258055  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.258387  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:02.258456  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:02.757985  826329 type.go:168] "Request Body" body=""
	I1208 00:33:02.758059  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:02.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.258055  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.258125  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.258438  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:03.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:03.757882  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:03.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:04.257989  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:04.258481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:04.758118  826329 type.go:168] "Request Body" body=""
	I1208 00:33:04.758201  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:04.758485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.258270  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:05.758448  826329 type.go:168] "Request Body" body=""
	I1208 00:33:05.758527  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:05.758934  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.257684  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.257772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.258049  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:06.757785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:06.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:06.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:06.758206  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:07.258726  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.258824  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.259215  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:07.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:07.758011  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:07.758271  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.257849  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:08.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:33:08.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:08.758171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:08.758228  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:09.234960  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:09.258398  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.258467  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.258726  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:09.299771  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:09.299811  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:09.299830  826329 retry.go:31] will retry after 22.917133282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
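	[Note] The addon path above applies /etc/kubernetes/addons/storage-provisioner.yaml with kubectl, and when validation fails because the apiserver's OpenAPI endpoint is unreachable it logs "apply failed, will retry" and sleeps for a growing delay (22.9s here). The sketch below shows that retry-after-delay shape; it is an assumption-laden illustration, not minikube's retry.go or addons.go, and the helper name and delay list are made up for the example.

	// Sketch only (assumed helper, not minikube code): retry a kubectl apply
	// with increasing delays while the apiserver is unreachable.
	package applyretry

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, delays []time.Duration) error {
		var lastErr error
		for attempt := 0; attempt <= len(delays); attempt++ {
			// Mirrors the command seen in the log (sudo accepts a leading VAR=value).
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
			if attempt == len(delays) {
				break
			}
			// e.g. "will retry after 22.917133282s" in the log above
			fmt.Printf("apply failed, will retry after %s: %v\n", delays[attempt], err)
			time.Sleep(delays[attempt])
		}
		return lastErr
	}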
	I1208 00:33:09.758561  826329 type.go:168] "Request Body" body=""
	I1208 00:33:09.758640  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:09.758995  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.258770  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.258868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.259197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:10.757838  826329 type.go:168] "Request Body" body=""
	I1208 00:33:10.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:10.758190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.257813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:11.258179  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:11.719678  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:11.758124  826329 type.go:168] "Request Body" body=""
	I1208 00:33:11.758203  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:11.758476  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:11.779600  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:11.783324  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:11.783357  826329 retry.go:31] will retry after 27.574784486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:12.257740  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.258104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:12.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:33:12.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:12.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:13.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.257894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.258219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:13.258272  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:13.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:33:13.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:33:14.757988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.257958  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.258037  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.258315  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:15.258360  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:15.757919  826329 type.go:168] "Request Body" body=""
	I1208 00:33:15.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.257870  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:33:16.757879  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.257963  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.258036  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.258357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:17.258414  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.758272  826329 type.go:168] "Request Body" body=""
	I1208 00:33:17.758354  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.758668  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.258406  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.258487  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.258798  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.758471  826329 type.go:168] "Request Body" body=""
	I1208 00:33:18.758544  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.758891  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.258691  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.258772  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.259134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.259190  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:33:19.757739  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.758088  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:33:20.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.757870  826329 type.go:168] "Request Body" body=""
	I1208 00:33:21.757943  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.758290  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.257808  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.757993  826329 type.go:168] "Request Body" body=""
	I1208 00:33:22.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.758417  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:33:23.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.257852  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.258182  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:24.258220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.757878  826329 type.go:168] "Request Body" body=""
	I1208 00:33:24.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.758349  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.258345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:33:25.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.257811  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.258284  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:33:26.758040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.758399  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.258252  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.258330  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.258588  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.758645  826329 type.go:168] "Request Body" body=""
	I1208 00:33:27.758735  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.759079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:33:28.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.758067  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.758108  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:29.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.757789  826329 type.go:168] "Request Body" body=""
	I1208 00:33:29.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.257875  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.257941  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.258210  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.757889  826329 type.go:168] "Request Body" body=""
	I1208 00:33:30.757960  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.758308  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.257774  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.757714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:31.757784  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.758087  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.217681  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:33:32.258110  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.258185  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.258497  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.272413  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:32.276021  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.276065  826329 retry.go:31] will retry after 31.830018043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:33:32.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:33:32.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.758299  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.758362  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.258151  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.258517  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.758371  826329 type.go:168] "Request Body" body=""
	I1208 00:33:33.758451  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.258598  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.258670  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.259035  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.758635  826329 type.go:168] "Request Body" body=""
	I1208 00:33:34.758714  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.759056  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.257714  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.258111  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:35.757946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.758267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.257939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.757821  826329 type.go:168] "Request Body" body=""
	I1208 00:33:36.757891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.258214  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.258289  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.258578  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:37.258623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.758354  826329 type.go:168] "Request Body" body=""
	I1208 00:33:37.758421  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.758674  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.258403  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.258497  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.258867  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.758486  826329 type.go:168] "Request Body" body=""
	I1208 00:33:38.758558  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.758906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.258694  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.258758  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.259030  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.259072  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.358376  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:33:39.412374  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416050  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:33:39.416143  826329 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
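	[Note] The block above is the point where the storageclass apply runs out of retries and minikube reports "Enabling 'default-storageclass' returned an error". The underlying kubectl failure is that it cannot download the OpenAPI schema from localhost:8441 to validate the manifest (its own message suggests --validate=false as a bypass). One way to avoid burning retries on that is to first probe whether the port accepts connections at all; the sketch below is an assumption for illustration, not anything minikube does.

	// Sketch only (assumed helper, not minikube code): check whether the
	// apiserver port that kubectl's OpenAPI download hits is accepting
	// connections before attempting another apply.
	package probe

	import (
		"net"
		"time"
	)

	func apiserverReachable(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false // e.g. "dial tcp [::1]:8441: connect: connection refused"
		}
		conn.Close()
		return true
	}

	Usage would be something like apiserverReachable("127.0.0.1:8441", time.Second) before each retry.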
	I1208 00:33:39.758638  826329 type.go:168] "Request Body" body=""
	I1208 00:33:39.758720  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.759108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.757846  826329 type.go:168] "Request Body" body=""
	I1208 00:33:40.757931  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.257809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.257898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.757977  826329 type.go:168] "Request Body" body=""
	I1208 00:33:41.758050  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.758344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.758393  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:42.258098  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.258182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.258488  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.758485  826329 type.go:168] "Request Body" body=""
	I1208 00:33:42.758557  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.758915  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.258576  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.258649  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.258992  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.757700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:43.757773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.758038  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.257757  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.258132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:44.258184  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.757809  826329 type.go:168] "Request Body" body=""
	I1208 00:33:44.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:33:45.757999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.758336  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.258084  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.258468  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:46.258519  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.758126  826329 type.go:168] "Request Body" body=""
	I1208 00:33:46.758195  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.758462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.258480  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.258906  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:47.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.758307  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.257842  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.258167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:33:48.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.758219  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.758291  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:49.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.258184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.757854  826329 type.go:168] "Request Body" body=""
	I1208 00:33:49.757922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.757790  826329 type.go:168] "Request Body" body=""
	I1208 00:33:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.257971  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.258282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:51.258346  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.757834  826329 type.go:168] "Request Body" body=""
	I1208 00:33:51.757908  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:33:52.758182  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.758452  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.258459  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.258556  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.258900  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:53.258955  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.758700  826329 type.go:168] "Request Body" body=""
	I1208 00:33:53.758780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.759083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.258123  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.757773  826329 type.go:168] "Request Body" body=""
	I1208 00:33:54.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.758170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:33:55.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:55.758182  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:56.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.757939  826329 type.go:168] "Request Body" body=""
	I1208 00:33:56.758018  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.758340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.258337  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.258409  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.258677  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.758592  826329 type.go:168] "Request Body" body=""
	I1208 00:33:57.758683  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.759000  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:57.759063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:58.257674  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.257773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.757693  826329 type.go:168] "Request Body" body=""
	I1208 00:33:58.757771  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.758081  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.258187  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:33:59.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.758199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.265698  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.265780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.266096  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:00.266143  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:00.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:00.757872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.258053  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:34:01.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.257892  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.258340  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.758185  826329 type.go:168] "Request Body" body=""
	I1208 00:34:02.758273  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.758590  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:02.758643  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:03.258621  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.258702  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:34:03.757895  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.758191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.106865  826329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:34:04.166273  826329 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166323  826329 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:34:04.166403  826329 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:34:04.169502  826329 out.go:179] * Enabled addons: 
	I1208 00:34:04.171536  826329 addons.go:530] duration metric: took 1m56.935875389s for enable addons: enabled=[]
	I1208 00:34:04.258604  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.258682  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.259013  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.758662  826329 type.go:168] "Request Body" body=""
	I1208 00:34:04.758731  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.759011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:04.759062  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:05.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.258200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:34:05.758048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.758370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.257730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.258101  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.758131  826329 type.go:168] "Request Body" body=""
	I1208 00:34:06.758204  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.758570  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.258500  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.258586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.258950  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:07.259055  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:07.757997  826329 type.go:168] "Request Body" body=""
	I1208 00:34:07.758070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.758357  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:08.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.257713  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.257788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.258063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:34:09.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.758195  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:09.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:10.257921  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.258005  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.258346  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.757735  826329 type.go:168] "Request Body" body=""
	I1208 00:34:10.757804  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.758062  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.757910  826329 type.go:168] "Request Body" body=""
	I1208 00:34:11.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.758309  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:11.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:12.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.258075  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.258391  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.757907  826329 type.go:168] "Request Body" body=""
	I1208 00:34:12.757979  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.258000  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.258079  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.757976  826329 type.go:168] "Request Body" body=""
	I1208 00:34:13.758046  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.758318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.257787  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:14.258216  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:14.757792  826329 type.go:168] "Request Body" body=""
	I1208 00:34:14.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:34:15.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.758229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.257940  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.258013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.258338  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:16.258395  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:16.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:34:16.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.758127  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.258701  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.258775  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.259137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.757896  826329 type.go:168] "Request Body" body=""
	I1208 00:34:17.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.758282  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.257973  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.258048  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.757762  826329 type.go:168] "Request Body" body=""
	I1208 00:34:18.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:18.758243  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.258352  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.758033  826329 type.go:168] "Request Body" body=""
	I1208 00:34:19.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.758409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.757890  826329 type.go:168] "Request Body" body=""
	I1208 00:34:20.757981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.758323  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:20.758384  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:21.257944  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.258010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.258262  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:34:21.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.758322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.257850  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.257925  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.258270  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.758019  826329 type.go:168] "Request Body" body=""
	I1208 00:34:22.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.758365  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:22.758408  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:23.258071  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.258151  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.258491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.758281  826329 type.go:168] "Request Body" body=""
	I1208 00:34:23.758363  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.758707  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.258477  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.258561  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.258932  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:24.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.759183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.759247  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:25.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.258000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:34:25.757806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.758120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.258248  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.757971  826329 type.go:168] "Request Body" body=""
	I1208 00:34:26.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.758380  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.258327  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.258401  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.258666  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:27.258716  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.758723  826329 type.go:168] "Request Body" body=""
	I1208 00:34:27.758798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.759103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.258140  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:34:28.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.257952  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.258027  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.258370  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.758085  826329 type.go:168] "Request Body" body=""
	I1208 00:34:29.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.758508  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:29.758566  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:30.258264  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.258340  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.258608  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.758360  826329 type.go:168] "Request Body" body=""
	I1208 00:34:30.758437  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.758793  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.258627  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.258701  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.259047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:34:31.757815  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.758076  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.257780  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:32.258235  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:32.758097  826329 type.go:168] "Request Body" body=""
	I1208 00:34:32.758176  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.258283  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.258362  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.258621  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.758421  826329 type.go:168] "Request Body" body=""
	I1208 00:34:33.758509  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.758874  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.258773  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.259148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:34.259210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.757843  826329 type.go:168] "Request Body" body=""
	I1208 00:34:34.757921  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.757916  826329 type.go:168] "Request Body" body=""
	I1208 00:34:35.757995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.758360  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.257977  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.258049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:34:36.757866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.758233  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:37.257891  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.257964  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.258296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.758129  826329 type.go:168] "Request Body" body=""
	I1208 00:34:37.758200  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.758490  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.258191  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.258269  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.258634  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.758454  826329 type.go:168] "Request Body" body=""
	I1208 00:34:38.758534  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.758898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.758959  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:39.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.258627  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.258916  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.758708  826329 type.go:168] "Request Body" body=""
	I1208 00:34:39.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.759139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.257796  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.258223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.757783  826329 type.go:168] "Request Body" body=""
	I1208 00:34:40.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.758212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:41.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.257845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:41.258249  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:41.757913  826329 type.go:168] "Request Body" body=""
	I1208 00:34:41.757994  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:41.758308  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:42.758011  826329 type.go:168] "Request Body" body=""
	I1208 00:34:42.758104  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:42.758449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:43.258150  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.258227  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.258566  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:43.258632  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:43.758358  826329 type.go:168] "Request Body" body=""
	I1208 00:34:43.758430  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:43.758722  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.258546  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.259073  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:44.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:34:44.757871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:44.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.257935  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.258485  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:45.758673  826329 type.go:168] "Request Body" body=""
	I1208 00:34:45.758756  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:45.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:45.759202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:46.257864  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.257946  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.258291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:46.758013  826329 type.go:168] "Request Body" body=""
	I1208 00:34:46.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:46.758428  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.258513  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.258598  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.259004  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:47.757974  826329 type.go:168] "Request Body" body=""
	I1208 00:34:47.758047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:47.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:48.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.257839  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.258125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:48.258175  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:48.757743  826329 type.go:168] "Request Body" body=""
	I1208 00:34:48.757816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:48.758138  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.257906  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:49.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:34:49.757829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:49.758137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:50.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.257875  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:50.258267  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:50.757934  826329 type.go:168] "Request Body" body=""
	I1208 00:34:50.758014  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:50.758361  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.258044  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.258119  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.258431  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:51.758821  826329 type.go:168] "Request Body" body=""
	I1208 00:34:51.758917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:51.759213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:52.757986  826329 type.go:168] "Request Body" body=""
	I1208 00:34:52.758060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:52.758375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:52.758428  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:53.257769  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:53.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:53.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:53.758227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.257791  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:54.757810  826329 type.go:168] "Request Body" body=""
	I1208 00:34:54.757886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:54.758249  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:55.257839  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.257917  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:55.258313  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:55.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:34:55.757796  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:55.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:56.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:34:56.757854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:56.758141  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:57.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.258322  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:57.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:57.758246  826329 type.go:168] "Request Body" body=""
	I1208 00:34:57.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:57.758647  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.258478  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.258560  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.258910  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:58.758706  826329 type.go:168] "Request Body" body=""
	I1208 00:34:58.758782  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:58.759102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.257905  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.258259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:59.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:34:59.758063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:59.758436  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:59.758494  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:00.270583  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.271106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.271544  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:00.758373  826329 type.go:168] "Request Body" body=""
	I1208 00:35:00.758448  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:00.758792  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.258597  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.258676  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.259052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:01.757784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:01.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:01.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:02.257942  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.258019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.258319  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:02.258369  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:02.758254  826329 type.go:168] "Request Body" body=""
	I1208 00:35:02.758335  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:02.758657  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.258485  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.258576  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.258926  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:03.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:03.757769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:03.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.258084  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:04.757820  826329 type.go:168] "Request Body" body=""
	I1208 00:35:04.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:04.758220  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:05.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.257988  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.258274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:35:05.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.758110  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.257890  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.258218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:35:06.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.758268  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:07.258187  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.258264  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.258524  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.758503  826329 type.go:168] "Request Body" body=""
	I1208 00:35:07.758579  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.758911  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.258711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.258788  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.259165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:08.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.758114  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:09.258314  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:09.757867  826329 type.go:168] "Request Body" body=""
	I1208 00:35:09.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.257728  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.258179  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:35:10.757861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.758154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.257828  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.257901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:35:11.757977  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:11.758292  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.257877  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.757929  826329 type.go:168] "Request Body" body=""
	I1208 00:35:12.758010  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.758331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.257734  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.257816  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.258128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.757740  826329 type.go:168] "Request Body" body=""
	I1208 00:35:13.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.758156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.257879  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.257958  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:14.258372  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:14.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:35:14.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.258220  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:35:15.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.257844  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.257920  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.258226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:16.757850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.758201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:16.758262  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.258017  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.258355  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.758047  826329 type.go:168] "Request Body" body=""
	I1208 00:35:17.758126  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.257797  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.258225  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.757982  826329 type.go:168] "Request Body" body=""
	I1208 00:35:18.758084  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:18.758496  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:19.258078  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.258148  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.258462  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:19.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.758152  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.257773  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.257847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.258174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.757731  826329 type.go:168] "Request Body" body=""
	I1208 00:35:20.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.758079  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.257818  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.257902  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:21.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:21.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:35:21.757893  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.758255  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.258007  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.258298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.757958  826329 type.go:168] "Request Body" body=""
	I1208 00:35:22.758029  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.758379  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.257782  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.257861  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.258186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.757721  826329 type.go:168] "Request Body" body=""
	I1208 00:35:23.757792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:23.758157  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.257832  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.257916  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.258224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.757747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:24.757838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.758162  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.257741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.257814  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.258153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.757849  826329 type.go:168] "Request Body" body=""
	I1208 00:35:25.757923  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:25.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.257792  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.257867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.258190  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.757716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:26.757791  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.758047  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.257747  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.257826  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.258159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.757938  826329 type.go:168] "Request Body" body=""
	I1208 00:35:27.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.758339  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:27.758399  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:28.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.257817  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.258135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:35:28.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.758185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.257754  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.257836  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.757884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:29.757957  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.758247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.257943  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.258020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.258359  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:30.258416  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:30.758069  826329 type.go:168] "Request Body" body=""
	I1208 00:35:30.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.758447  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.257716  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.258108  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.757788  826329 type.go:168] "Request Body" body=""
	I1208 00:35:31.757859  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.758213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.257931  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.258342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.758262  826329 type.go:168] "Request Body" body=""
	I1208 00:35:32.758329  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.758582  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:32.758623  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.258445  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.258519  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.258864  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:35:33.758759  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.759120  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.257806  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.258192  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.757780  826329 type.go:168] "Request Body" body=""
	I1208 00:35:34.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.257854  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.258243  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.258302  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:35.757946  826329 type.go:168] "Request Body" body=""
	I1208 00:35:35.758019  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.758342  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.258034  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.258106  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.758092  826329 type.go:168] "Request Body" body=""
	I1208 00:35:36.758170  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.758498  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.258371  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.258441  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.258740  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.258804  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:37.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:35:37.758737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.759093  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.258189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.758009  826329 type.go:168] "Request Body" body=""
	I1208 00:35:38.758085  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.758354  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.257846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.258253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.758008  826329 type.go:168] "Request Body" body=""
	I1208 00:35:39.758083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.758427  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:39.758481  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.257851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.258151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.757767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:40.757846  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.758147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.257838  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.257911  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.258244  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.757920  826329 type.go:168] "Request Body" body=""
	I1208 00:35:41.757992  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.758263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.257833  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.257922  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.258385  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.258459  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.758115  826329 type.go:168] "Request Body" body=""
	I1208 00:35:42.758189  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.758495  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.258231  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.258593  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.758356  826329 type.go:168] "Request Body" body=""
	I1208 00:35:43.758433  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.758767  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.258451  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.258526  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.258817  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.258887  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:44.758589  826329 type.go:168] "Request Body" body=""
	I1208 00:35:44.758661  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.758935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.257830  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:35:45.757933  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.758313  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.257995  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.258070  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.258330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:46.757844  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.758227  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.258305  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.757930  826329 type.go:168] "Request Body" body=""
	I1208 00:35:47.758004  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.758268  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.757753  826329 type.go:168] "Request Body" body=""
	I1208 00:35:48.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.758174  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.258251  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.757923  826329 type.go:168] "Request Body" body=""
	I1208 00:35:49.758020  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.758330  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.258077  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.258159  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.258484  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:35:50.757837  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.758102  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.258133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:51.757936  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.758234  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.758281  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.257817  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.257892  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.758053  826329 type.go:168] "Request Body" body=""
	I1208 00:35:52.758141  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.758433  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.258161  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.258233  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.258558  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.758318  826329 type.go:168] "Request Body" body=""
	I1208 00:35:53.758393  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.758646  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.758686  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.258483  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.258562  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.258917  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.758694  826329 type.go:168] "Request Body" body=""
	I1208 00:35:54.758792  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.759186  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.257832  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.258147  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.757691  826329 type.go:168] "Request Body" body=""
	I1208 00:35:55.757780  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.758109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.257711  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.258202  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.757858  826329 type.go:168] "Request Body" body=""
	I1208 00:35:56.757927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.257884  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.257966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.258314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.758093  826329 type.go:168] "Request Body" body=""
	I1208 00:35:57.758166  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.758502  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.258229  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.258304  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.258576  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.258619  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:58.758339  826329 type.go:168] "Request Body" body=""
	I1208 00:35:58.758413  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.758719  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.258566  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.258656  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.259028  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:35:59.757811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.758074  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.258301  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.757822  826329 type.go:168] "Request Body" body=""
	I1208 00:36:00.757896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.758184  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:00.758231  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.257745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.258119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.757771  826329 type.go:168] "Request Body" body=""
	I1208 00:36:01.757848  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.758161  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.257756  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.258170  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.757970  826329 type.go:168] "Request Body" body=""
	I1208 00:36:02.758045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:02.758357  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:03.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.258175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.757799  826329 type.go:168] "Request Body" body=""
	I1208 00:36:03.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.758980  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:36:04.257702  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.257786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.258057  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:04.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.758149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.257856  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.258006  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.258344  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:05.757874  826329 type.go:168] "Request Body" body=""
	I1208 00:36:05.757952  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.758274  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.257951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.258024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.258331  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.757806  826329 type.go:168] "Request Body" body=""
	I1208 00:36:06.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.758228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.258156  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.258257  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.258603  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:07.258657  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:07.758639  826329 type.go:168] "Request Body" body=""
	I1208 00:36:07.758722  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.759070  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.257829  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.757734  826329 type.go:168] "Request Body" body=""
	I1208 00:36:08.757812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.257802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.257878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:36:09.758023  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.758383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:09.758454  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:10.258096  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.258168  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.258420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:10.757867  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.758202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.257926  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.258015  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.758043  826329 type.go:168] "Request Body" body=""
	I1208 00:36:11.758118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.758421  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.257886  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.258212  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:12.258271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:12.758147  826329 type.go:168] "Request Body" body=""
	I1208 00:36:12.758239  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.758564  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.258372  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.258650  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.758403  826329 type.go:168] "Request Body" body=""
	I1208 00:36:13.758476  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.758795  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.258438  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.258516  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.258865  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:14.258923  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:14.758558  826329 type.go:168] "Request Body" body=""
	I1208 00:36:14.758632  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.758960  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.257698  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.257781  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:36:15.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.257941  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.258012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.258318  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.757778  826329 type.go:168] "Request Body" body=""
	I1208 00:36:16.757852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.758196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:16.758250  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:17.257965  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.258040  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.258353  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.757949  826329 type.go:168] "Request Body" body=""
	I1208 00:36:17.758021  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.257775  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.257850  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.258171  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.757802  826329 type.go:168] "Request Body" body=""
	I1208 00:36:18.757883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.758209  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.257767  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.257838  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.258195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:19.757899  826329 type.go:168] "Request Body" body=""
	I1208 00:36:19.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.758306  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.257881  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.258258  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.757816  826329 type.go:168] "Request Body" body=""
	I1208 00:36:20.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.758178  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.257800  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.257883  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.258213  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.258270  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:21.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:21.758028  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.758372  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.258048  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.258121  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.258383  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.757988  826329 type.go:168] "Request Body" body=""
	I1208 00:36:22.758096  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.758420  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:23.258320  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:23.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:36:23.758051  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.758371  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.258081  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.258162  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.258509  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.758321  826329 type.go:168] "Request Body" body=""
	I1208 00:36:24.758398  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.758744  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.258469  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.258537  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.258876  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:25.258924  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:25.758650  826329 type.go:168] "Request Body" body=""
	I1208 00:36:25.758727  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.759090  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.258185  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.757875  826329 type.go:168] "Request Body" body=""
	I1208 00:36:26.757942  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.758194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.257841  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.257927  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:27.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.758332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:27.758386  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:28.257969  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.258045  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.258295  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:28.758107  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.758437  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.258229  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:29.757822  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.758078  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.257824  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.257913  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.258261  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.258331  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:30.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:36:30.757915  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.758211  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.257869  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.257937  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.757769  826329 type.go:168] "Request Body" body=""
	I1208 00:36:31.757841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.758144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.257781  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.258166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.757940  826329 type.go:168] "Request Body" body=""
	I1208 00:36:32.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.758265  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:32.758305  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:33.257772  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.257856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.258196  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:33.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:36:33.757888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.758193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.257750  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.258142  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.757815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:34.757887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.758218  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.257918  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.257997  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.258317  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:35.258379  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:35.757745  826329 type.go:168] "Request Body" body=""
	I1208 00:36:35.757819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.758135  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.257783  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.257858  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.258193  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:36.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.758166  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.258659  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.258733  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.259043  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:37.259083  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:37.757951  826329 type.go:168] "Request Body" body=""
	I1208 00:36:37.758024  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.758345  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.257874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.258227  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.757932  826329 type.go:168] "Request Body" body=""
	I1208 00:36:38.758013  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.758289  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.257801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.258238  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.757952  826329 type.go:168] "Request Body" body=""
	I1208 00:36:39.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.758378  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:39.758433  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:40.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.257793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.258042  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:40.757726  826329 type.go:168] "Request Body" body=""
	I1208 00:36:40.757803  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.257744  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.257823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.258154  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:36:41.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.758133  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.257815  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.258239  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:42.258298  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:42.758027  826329 type.go:168] "Request Body" body=""
	I1208 00:36:42.758111  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.758448  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.257743  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.258130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.757851  826329 type.go:168] "Request Body" body=""
	I1208 00:36:43.757926  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.758259  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.257964  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.258047  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.258406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:44.258465  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:44.757755  826329 type.go:168] "Request Body" body=""
	I1208 00:36:44.757827  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.758128  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.257829  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.257930  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.258337  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:45.757876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.758253  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.257794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.258137  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.757749  826329 type.go:168] "Request Body" body=""
	I1208 00:36:46.757828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.758175  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:46.758229  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:47.257908  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.257985  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.258332  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:47.757967  826329 type.go:168] "Request Body" body=""
	I1208 00:36:47.758039  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.758296  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.257872  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.258199  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.757801  826329 type.go:168] "Request Body" body=""
	I1208 00:36:48.757878  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.758214  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:48.758271  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:49.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.257871  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.757819  826329 type.go:168] "Request Body" body=""
	I1208 00:36:49.757898  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:49.758237  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.257786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.257865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:50.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:36:50.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:50.758139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:51.257803  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.257880  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.258144  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:51.258193  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:51.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:36:51.757868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:51.758200  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.257870  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.258287  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:52.758014  826329 type.go:168] "Request Body" body=""
	I1208 00:36:52.758090  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:52.758414  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:53.258138  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.258234  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.258594  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:53.258654  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:53.757742  826329 type.go:168] "Request Body" body=""
	I1208 00:36:53.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:53.758121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.257766  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:54.757770  826329 type.go:168] "Request Body" body=""
	I1208 00:36:54.757856  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:54.758223  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.257895  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.257969  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.258267  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:55.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:36:55.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:55.758150  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:55.758195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:56.257784  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.257862  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.258194  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:56.757733  826329 type.go:168] "Request Body" body=""
	I1208 00:36:56.757805  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:56.758064  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.258687  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.258769  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.259122  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:57.757909  826329 type.go:168] "Request Body" body=""
	I1208 00:36:57.757984  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:57.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:57.758349  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:58.257827  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.257904  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:58.757793  826329 type.go:168] "Request Body" body=""
	I1208 00:36:58.757865  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:58.758197  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.257858  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.257940  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.258273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:59.757944  826329 type.go:168] "Request Body" body=""
	I1208 00:36:59.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:59.758280  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:00.257988  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.258083  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.258409  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:00.258457  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:00.758379  826329 type.go:168] "Request Body" body=""
	I1208 00:37:00.758466  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:00.758803  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.258644  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.258737  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.259037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:01.757751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:01.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:01.758132  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.257807  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.257884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.258191  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:02.757942  826329 type.go:168] "Request Body" body=""
	I1208 00:37:02.758012  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:02.758275  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:02.758316  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:03.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.257863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.258232  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:03.757961  826329 type.go:168] "Request Body" body=""
	I1208 00:37:03.758042  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:03.758415  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.258085  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.258154  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.258494  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:04.758211  826329 type.go:168] "Request Body" body=""
	I1208 00:37:04.758302  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:04.758664  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:04.758720  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:05.258496  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.258572  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.258935  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:05.757664  826329 type.go:168] "Request Body" body=""
	I1208 00:37:05.757745  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:05.758009  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.257731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.257811  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.258149  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:06.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:06.757928  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:06.758260  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:07.258197  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.258266  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.258533  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:07.258574  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:07.758487  826329 type.go:168] "Request Body" body=""
	I1208 00:37:07.758564  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:07.758919  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.258731  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.258806  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.259157  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:08.757712  826329 type.go:168] "Request Body" body=""
	I1208 00:37:08.757783  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:08.758052  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.257785  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.257857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.258155  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:09.757797  826329 type.go:168] "Request Body" body=""
	I1208 00:37:09.757874  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:09.758285  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:09.758354  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:10.257742  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.257812  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.258068  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:10.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:10.757847  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:10.758172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.257795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.257869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:11.757777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:11.757851  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:11.758165  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:12.257867  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.257950  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.258272  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:12.258328  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:12.758227  826329 type.go:168] "Request Body" body=""
	I1208 00:37:12.758306  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:12.758623  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.258376  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.258454  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.258723  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:13.758551  826329 type.go:168] "Request Body" body=""
	I1208 00:37:13.758624  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:13.758979  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.257722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.258121  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:14.757754  826329 type.go:168] "Request Body" body=""
	I1208 00:37:14.757823  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:14.758159  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:14.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:15.257768  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.257841  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:15.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:15.757863  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:15.758236  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.257917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.258276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:16.757798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:16.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:16.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:16.758276  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:17.257980  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.258060  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:17.757978  826329 type.go:168] "Request Body" body=""
	I1208 00:37:17.758049  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:17.758343  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.257799  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.257887  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.258231  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:18.757795  826329 type.go:168] "Request Body" body=""
	I1208 00:37:18.757884  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:18.758230  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:19.257736  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.257808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.258129  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:19.258185  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:19.757760  826329 type.go:168] "Request Body" body=""
	I1208 00:37:19.757842  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:19.758169  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.257828  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.258148  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:20.757722  826329 type.go:168] "Request Body" body=""
	I1208 00:37:20.757789  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:20.758063  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:21.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:21.258238  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:21.757917  826329 type.go:168] "Request Body" body=""
	I1208 00:37:21.758000  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:21.758316  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.257738  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.257820  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.258134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:22.758012  826329 type.go:168] "Request Body" body=""
	I1208 00:37:22.758097  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:22.758430  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.257876  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.258177  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:23.757830  826329 type.go:168] "Request Body" body=""
	I1208 00:37:23.757901  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:23.758240  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:23.758293  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:24.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.257866  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.258217  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:24.757779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:24.757860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:24.758189  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.257753  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.257835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.258103  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:25.757791  826329 type.go:168] "Request Body" body=""
	I1208 00:37:25.757873  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:25.758180  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:26.257798  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.257885  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.258263  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:26.258318  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:26.757964  826329 type.go:168] "Request Body" body=""
	I1208 00:37:26.758030  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:26.758273  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.258297  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.258369  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.258691  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:27.758719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:27.758793  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:27.759134  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.257751  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.257821  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.258083  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:28.757759  826329 type.go:168] "Request Body" body=""
	I1208 00:37:28.757834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:28.758151  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:28.758210  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:29.257770  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.257843  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.258164  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:29.757719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:29.757786  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:29.758037  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.257777  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.258173  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:30.757761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:30.757835  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:30.758153  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:31.257719  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.257787  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.258040  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:31.258078  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:31.757746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:31.757831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:31.758167  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.257904  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.257981  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.258329  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:32.758087  826329 type.go:168] "Request Body" body=""
	I1208 00:37:32.758153  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:32.758406  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:33.257779  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.257860  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.258158  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:33.258205  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:33.757872  826329 type.go:168] "Request Body" body=""
	I1208 00:37:33.757959  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:33.758300  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.257922  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.257990  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.258252  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:34.757741  826329 type.go:168] "Request Body" body=""
	I1208 00:37:34.757813  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:34.758130  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:35.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.257853  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.258198  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:35.258259  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:35.757729  826329 type.go:168] "Request Body" body=""
	I1208 00:37:35.757808  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:35.758125  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.257765  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.257840  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:36.757786  826329 type.go:168] "Request Body" body=""
	I1208 00:37:36.757864  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:36.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:37.258028  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.258098  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.258344  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:37.258383  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:37.757945  826329 type.go:168] "Request Body" body=""
	I1208 00:37:37.758016  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:37.758350  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.257793  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.258202  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:38.757892  826329 type.go:168] "Request Body" body=""
	I1208 00:37:38.757966  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:38.758224  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.257746  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.257819  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.258172  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:39.757781  826329 type.go:168] "Request Body" body=""
	I1208 00:37:39.757857  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:39.758205  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:39.758261  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:40.257896  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.257976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.258247  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:40.757794  826329 type.go:168] "Request Body" body=""
	I1208 00:37:40.757869  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:40.758250  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.257776  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.257852  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:41.757732  826329 type.go:168] "Request Body" body=""
	I1208 00:37:41.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:41.758046  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:42.257804  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.257891  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.258257  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:42.258317  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:42.758046  826329 type.go:168] "Request Body" body=""
	I1208 00:37:42.758145  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:42.758527  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.258300  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.258368  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.258629  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:43.758381  826329 type.go:168] "Request Body" body=""
	I1208 00:37:43.758456  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:43.758773  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:44.258642  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.258728  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.259104  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:44.259162  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:44.757666  826329 type.go:168] "Request Body" body=""
	I1208 00:37:44.757747  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:44.758033  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.257929  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.258118  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.258898  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:45.758678  826329 type.go:168] "Request Body" body=""
	I1208 00:37:45.758751  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:45.759069  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:46.258690  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.258765  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.259139  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:46.259195  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:46.757764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:46.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:46.758163  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.258180  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.258255  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.258575  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:47.757955  826329 type.go:168] "Request Body" body=""
	I1208 00:37:47.758026  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:47.758294  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.257778  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.257855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.258181  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:48.757898  826329 type.go:168] "Request Body" body=""
	I1208 00:37:48.757975  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:48.758298  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:48.758358  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:49.257739  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.257818  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.258126  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:49.757824  826329 type.go:168] "Request Body" body=""
	I1208 00:37:49.757899  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:49.758221  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.257788  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.257868  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.258201  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:50.757901  826329 type.go:168] "Request Body" body=""
	I1208 00:37:50.757976  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:50.758245  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:51.257764  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.257834  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.258183  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:51.258245  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:51.757772  826329 type.go:168] "Request Body" body=""
	I1208 00:37:51.757845  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:51.758176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.257835  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.257907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.258160  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:52.757998  826329 type.go:168] "Request Body" body=""
	I1208 00:37:52.758067  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:52.758400  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.257761  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.257831  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.258156  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:53.757730  826329 type.go:168] "Request Body" body=""
	I1208 00:37:53.757801  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:53.758051  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:53.758091  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:54.257814  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.257889  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.258241  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:54.757811  826329 type.go:168] "Request Body" body=""
	I1208 00:37:54.757894  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:54.758226  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.257720  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.257799  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.258107  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:55.757840  826329 type.go:168] "Request Body" body=""
	I1208 00:37:55.757929  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:55.758276  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:55.758329  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:56.257991  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.258063  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.258375  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:56.757728  826329 type.go:168] "Request Body" body=""
	I1208 00:37:56.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:56.758080  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.257836  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.257909  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.258228  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:57.757928  826329 type.go:168] "Request Body" body=""
	I1208 00:37:57.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:57.758314  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:37:57.758371  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:37:58.257725  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.257797  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.258109  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:58.757817  826329 type.go:168] "Request Body" body=""
	I1208 00:37:58.757907  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:58.758235  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.257927  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.257999  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.258328  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:37:59.757848  826329 type.go:168] "Request Body" body=""
	I1208 00:37:59.757914  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:37:59.758168  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:00.257912  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.257995  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.258367  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:00.258421  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:00.758080  826329 type.go:168] "Request Body" body=""
	I1208 00:38:00.758156  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:00.758491  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.258328  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.258416  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.258737  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:01.758513  826329 type.go:168] "Request Body" body=""
	I1208 00:38:01.758586  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:01.758951  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.257691  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.257768  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.258118  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:02.757931  826329 type.go:168] "Request Body" body=""
	I1208 00:38:02.758008  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:02.758286  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:02.758341  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:03.258024  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.258103  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.258449  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:03.758162  826329 type.go:168] "Request Body" body=""
	I1208 00:38:03.758236  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:03.758778  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.258558  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.258630  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.258999  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:04.757698  826329 type.go:168] "Request Body" body=""
	I1208 00:38:04.757798  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:04.758119  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:05.257820  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.257896  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.258242  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:05.258295  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:05.757768  826329 type.go:168] "Request Body" body=""
	I1208 00:38:05.757833  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:05.758117  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.257819  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.257888  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.258176  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:06.757775  826329 type.go:168] "Request Body" body=""
	I1208 00:38:06.757855  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:06.758222  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:07.262532  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.262623  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.263011  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:38:07.263063  826329 node_ready.go:55] error getting node "functional-525396" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-525396": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:38:07.757922  826329 type.go:168] "Request Body" body=""
	I1208 00:38:07.758001  826329 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-525396" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:38:07.758291  826329 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:38:08.257967  826329 node_ready.go:38] duration metric: took 6m0.00040399s for node "functional-525396" to be "Ready" ...
	I1208 00:38:08.261085  826329 out.go:203] 
	W1208 00:38:08.263874  826329 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:38:08.263896  826329 out.go:285] * 
	W1208 00:38:08.266040  826329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:38:08.269117  826329 out.go:203] 
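
The log above is minikube's node_ready.go wait loop: roughly every 500 ms it re-issues GET https://192.168.49.2:8441/api/v1/nodes/functional-525396, logs "connection refused" as a retryable error, and finally gives up when the 6-minute node wait expires at 00:38:08, producing the GUEST_START exit. As an illustration only (this is not minikube's actual implementation), an equivalent poll against the same kubeconfig and node could look like the client-go sketch below; the 6-minute deadline and ~500 ms interval are taken from the log, everything else is an assumption.

    // Illustrative sketch, not minikube source: poll the Ready condition of the
    // node named in the log until it is True or the deadline passes, tolerating
    // "connection refused" while the apiserver is down.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path and node name are the ones that appear in this report.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// 6m matches the "wait 6m0s for node" deadline in the log.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	for {
    		node, err := client.CoreV1().Nodes().Get(ctx, "functional-525396", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		} else {
    			// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
    			fmt.Println("will retry:", err)
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for node to be Ready")
    			return
    		case <-time.After(500 * time.Millisecond): // ~500 ms cadence seen in the log
    		}
    	}
    }
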
	
	
	==> CRI-O <==
	Dec 08 00:38:16 functional-525396 crio[5366]: time="2025-12-08T00:38:16.949491859Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f4eaf628-6de6-4466-aca7-624d7f3b6914 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053265535Z" level=info msg="Checking image status: minikube-local-cache-test:functional-525396" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053468212Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053513136Z" level=info msg="Image minikube-local-cache-test:functional-525396 not found" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.053588993Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-525396 found" id=4354e420-bfce-4a9f-ba86-cab9a320df36 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082147724Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-525396" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082297355Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-525396 not found" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.082342336Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-525396 found" id=f31a74d2-df28-4627-b11e-9b92846df63d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.10804915Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-525396" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.108198223Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-525396 not found" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:18 functional-525396 crio[5366]: time="2025-12-08T00:38:18.108238174Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-525396 found" id=595e9b2f-54e7-436d-a8cd-5006b3a42abf name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.09750356Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c94e4ad-c4a7-48fb-b79c-98d473974851 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429399874Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429582251Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.429631703Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2856dd12-75ce-43d3-9da2-47851d826181 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987727152Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987886792Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:19 functional-525396 crio[5366]: time="2025-12-08T00:38:19.987947191Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2de1d565-d8a5-4786-a365-72b297636039 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.023951286Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.024080657Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.024115136Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9b2d4f3e-d6df-453a-af2d-37a16f111390 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.071842765Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.072030098Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.072079904Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=48377f59-4200-45f9-afae-aa2039ba49ea name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:38:20 functional-525396 crio[5366]: time="2025-12-08T00:38:20.606102194Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=92ba9907-b69c-4125-b030-0d1648257605 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:24.790490    9550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:24.791409    9550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:24.792368    9550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:24.794154    9550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:24.794611    9550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:38:24 up  5:20,  0 user,  load average: 0.69, 0.34, 0.70
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:38:22 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 08 00:38:23 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:23 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:23 functional-525396 kubelet[9422]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:23 functional-525396 kubelet[9422]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:23 functional-525396 kubelet[9422]: E1208 00:38:23.080464    9422 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 08 00:38:23 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:23 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:23 functional-525396 kubelet[9456]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:23 functional-525396 kubelet[9456]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:23 functional-525396 kubelet[9456]: E1208 00:38:23.796679    9456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:23 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:38:24 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 08 00:38:24 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:24 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:38:24 functional-525396 kubelet[9486]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:24 functional-525396 kubelet[9486]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:38:24 functional-525396 kubelet[9486]: E1208 00:38:24.567777    9486 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:38:24 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:38:24 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (373.484176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.46s)
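The kubelet journal above shows why every kubectl path in this group fails: the node agent is crash-looping with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind localhost:8441 never comes up and each check ends in "connection refused". A quick way to confirm which cgroup hierarchy the host and the kic container are actually on (a generic diagnostic sketch, not part of the test harness; only the profile name functional-525396 comes from this report):

	# cgroup2fs means cgroup v2, tmpfs means the kubelet-rejected cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# same check inside the container backing the profile
	docker exec functional-525396 stat -fc %T /sys/fs/cgroup/
	# inspect the crash loop directly, as the kubeadm output also suggests
	docker exec functional-525396 journalctl -u kubelet --no-pager | tail -n 50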

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 00:40:34.379617  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:42:46.336480  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:44:09.403575  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:45:34.381533  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:47:46.335435  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:50:34.384230  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.56778849s)

                                                
                                                
-- stdout --
	* [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115059s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.569201681s for "functional-525396" cluster.
I1208 00:50:38.391423  791807 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
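The stderr block above already names the two relevant knobs: kubeadm's SystemVerification warning says kubelet v1.35+ refuses to start on a cgroup v1 host unless the KubeletConfiguration option FailCgroupV1 is set to false, and minikube itself suggests retrying with kubelet.cgroup-driver=systemd. A hedged sketch of both retries follows; whether this minikube build plumbs these exact kubelet options through --extra-config is an assumption, so treat it as a starting point rather than a verified fix.

	# retry suggested verbatim by the minikube error output above
	out/minikube-linux-arm64 start -p functional-525396 \
	  --extra-config=kubelet.cgroup-driver=systemd --wait=all
	# sketch only: opt back into cgroup v1 per the kubeadm warning, assuming
	# the flag spelling fail-cgroup-v1 maps to the FailCgroupV1 config option
	out/minikube-linux-arm64 start -p functional-525396 \
	  --extra-config=kubelet.fail-cgroup-v1=false --wait=all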
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
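The inspect output shows the container is still running and that the apiserver port 8441/tcp is published to 127.0.0.1:33511 on the host, which matches the earlier "connection refused" errors: the mapping exists, but nothing is listening behind it while the kubelet crash-loops. A quick check from the host, using only values shown above (expected to fail until the control plane is up):

	# resolve the published host port for the apiserver (prints 127.0.0.1:33511 here)
	docker port functional-525396 8441
	# probe the apiserver health endpoint through that mapping; with the kubelet
	# down this is expected to return "connection refused"
	curl -k https://127.0.0.1:33511/healthz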
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (321.94988ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-714395 image ls --format yaml --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format json --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format table --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh     │ functional-714395 ssh pgrep buildkitd                                                                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image   │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                            │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls                                                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete  │ -p functional-714395                                                                                                                              │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start   │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start   │ -p functional-525396 --alsologtostderr -v=8                                                                                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:latest                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add minikube-local-cache-test:functional-525396                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache delete minikube-local-cache-test:functional-525396                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl images                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ cache   │ functional-525396 cache reload                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ kubectl │ functional-525396 kubectl -- --context functional-525396 get pods                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ start   │ -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:38:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:38:25.865142  832221 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:38:25.865266  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865270  832221 out.go:374] Setting ErrFile to fd 2...
	I1208 00:38:25.865273  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865522  832221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:38:25.865905  832221 out.go:368] Setting JSON to false
	I1208 00:38:25.866798  832221 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":19238,"bootTime":1765135068,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:38:25.866898  832221 start.go:143] virtualization:  
	I1208 00:38:25.870446  832221 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:38:25.873443  832221 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:38:25.873527  832221 notify.go:221] Checking for updates...
	I1208 00:38:25.877177  832221 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:38:25.880254  832221 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:38:25.883080  832221 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:38:25.885867  832221 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:38:25.888710  832221 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:38:25.892134  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:25.892227  832221 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:38:25.926814  832221 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:38:25.926949  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:25.982933  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:25.973301038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:25.983053  832221 docker.go:319] overlay module found
	I1208 00:38:25.986144  832221 out.go:179] * Using the docker driver based on existing profile
	I1208 00:38:25.988897  832221 start.go:309] selected driver: docker
	I1208 00:38:25.988906  832221 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:25.989004  832221 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:38:25.989104  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:26.085905  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:26.075169003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:26.086340  832221 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:38:26.086364  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:26.086419  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:26.086463  832221 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:26.089599  832221 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:38:26.092632  832221 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:38:26.095593  832221 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:38:26.098465  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:26.098511  832221 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:38:26.098512  832221 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:38:26.098520  832221 cache.go:65] Caching tarball of preloaded images
	I1208 00:38:26.098640  832221 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:38:26.098648  832221 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:38:26.098767  832221 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:38:26.118762  832221 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:38:26.118779  832221 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:38:26.118798  832221 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:38:26.118832  832221 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:38:26.118982  832221 start.go:364] duration metric: took 72.616µs to acquireMachinesLock for "functional-525396"
	I1208 00:38:26.119001  832221 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:38:26.119005  832221 fix.go:54] fixHost starting: 
	I1208 00:38:26.119276  832221 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:38:26.135702  832221 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:38:26.135737  832221 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:38:26.138942  832221 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:38:26.138968  832221 machine.go:94] provisionDockerMachine start ...
	I1208 00:38:26.139048  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.156040  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.156360  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.156366  832221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:38:26.306195  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.306209  832221 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:38:26.306278  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.323547  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.323853  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.323861  832221 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:38:26.483358  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.483423  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.500892  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.501201  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.501214  832221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:38:26.651219  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
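For reference, the provisioning steps above (querying the hostname, setting it, and patching /etc/hosts) are all run over SSH against the container's forwarded port. A minimal sketch of that pattern with golang.org/x/crypto/ssh, assuming the port (33508), user (docker) and key path reported in this log — this is illustrative only, not minikube's actual libmachine code:

```go
// sshrun.go - illustrative sketch: run one command over SSH, similar in spirit
// to the libmachine "native" SSH client shown in the log above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values mirror the log (forwarded port 33508, user "docker"); the key
	// path is the machine id_rsa reported by sshutil.go above.
	keyPath := "/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33508", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // same first command as in the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out)
}
```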
	I1208 00:38:26.651236  832221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:38:26.651262  832221 ubuntu.go:190] setting up certificates
	I1208 00:38:26.651269  832221 provision.go:84] configureAuth start
	I1208 00:38:26.651330  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:26.668935  832221 provision.go:143] copyHostCerts
	I1208 00:38:26.669007  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:38:26.669020  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:38:26.669092  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:38:26.669226  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:38:26.669232  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:38:26.669258  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:38:26.669316  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:38:26.669319  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:38:26.669351  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:38:26.669396  832221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
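A minimal sketch of issuing a server certificate with the SANs listed in the line above, using crypto/x509. It generates a throwaway CA so the example is self-contained, whereas minikube signs with its existing ca.pem/ca-key.pem; treat the whole block as an assumption-laden illustration rather than minikube's implementation:

```go
// certsketch.go - illustrative sketch: server cert with the SANs from the log
// (san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for .minikube/certs/ca.pem + ca-key.pem).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs and org reported in the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-525396"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-525396", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```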
	I1208 00:38:26.882878  832221 provision.go:177] copyRemoteCerts
	I1208 00:38:26.882932  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:38:26.882976  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.900195  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.008298  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:38:27.026654  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:38:27.044245  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:38:27.061828  832221 provision.go:87] duration metric: took 410.535167ms to configureAuth
	I1208 00:38:27.061847  832221 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:38:27.062049  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:27.062144  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.079069  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:27.079387  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:27.079399  832221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:38:27.403353  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:38:27.403368  832221 machine.go:97] duration metric: took 1.264393629s to provisionDockerMachine
	I1208 00:38:27.403378  832221 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:38:27.403389  832221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:38:27.403457  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:38:27.403520  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.422294  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.531362  832221 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:38:27.534870  832221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:38:27.534888  832221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:38:27.534898  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:38:27.534950  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:38:27.535028  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:38:27.535101  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:38:27.535142  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:38:27.543303  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:27.561264  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:38:27.579215  832221 start.go:296] duration metric: took 175.824145ms for postStartSetup
	I1208 00:38:27.579284  832221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:38:27.579329  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.597098  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.699502  832221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:38:27.703953  832221 fix.go:56] duration metric: took 1.584940995s for fixHost
	I1208 00:38:27.703967  832221 start.go:83] releasing machines lock for "functional-525396", held for 1.584978296s
	I1208 00:38:27.704034  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:27.720794  832221 ssh_runner.go:195] Run: cat /version.json
	I1208 00:38:27.720838  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.721083  832221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:38:27.721126  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.740766  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.744839  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.842382  832221 ssh_runner.go:195] Run: systemctl --version
	I1208 00:38:27.933498  832221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:38:27.969664  832221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:38:27.973926  832221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:38:27.973991  832221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:38:27.981670  832221 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:38:27.981684  832221 start.go:496] detecting cgroup driver to use...
	I1208 00:38:27.981714  832221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:38:27.981757  832221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:38:27.996930  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:38:28.011523  832221 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:38:28.011601  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:38:28.029696  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:38:28.043991  832221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:38:28.162184  832221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:38:28.302345  832221 docker.go:234] disabling docker service ...
	I1208 00:38:28.302409  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:38:28.316944  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:38:28.329323  832221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:38:28.471674  832221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:38:28.594617  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:38:28.607360  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:38:28.621958  832221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:38:28.622014  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.631486  832221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:38:28.631544  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.641093  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.650549  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.660155  832221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:38:28.667958  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.676952  832221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.685235  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.693630  832221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:38:28.701133  832221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:38:28.708624  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:28.814162  832221 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:38:28.986282  832221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:38:28.986346  832221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
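The 60-second wait above is a stat-and-retry loop with a deadline. A minimal sketch of that polling pattern, assuming the socket path and rough timings from this log; waitForPath is a made-up helper, not a minikube function:

```go
// waitsock.go - illustrative sketch: poll for a path until it exists or a
// deadline passes, similar to the "Will wait 60s for socket path" step above.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForPath is a hypothetical helper for this example only.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket (or file) exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is present")
}
```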
	I1208 00:38:28.991517  832221 start.go:564] Will wait 60s for crictl version
	I1208 00:38:28.991573  832221 ssh_runner.go:195] Run: which crictl
	I1208 00:38:28.995534  832221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:38:29.025912  832221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:38:29.025997  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.062279  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.096298  832221 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:38:29.099065  832221 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:38:29.116028  832221 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:38:29.122672  832221 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:38:29.125488  832221 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:38:29.125636  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:29.125706  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.164815  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.164827  832221 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:38:29.164879  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.195499  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.195511  832221 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:38:29.195518  832221 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:38:29.195647  832221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:38:29.195726  832221 ssh_runner.go:195] Run: crio config
	I1208 00:38:29.250138  832221 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:38:29.250159  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:29.250168  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:29.250181  832221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:38:29.250206  832221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:38:29.250329  832221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:38:29.250397  832221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:38:29.258150  832221 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:38:29.258234  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:38:29.265694  832221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:38:29.278151  832221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:38:29.290865  832221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
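The kubeadm.yaml written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch that lists those documents with gopkg.in/yaml.v3, assuming the file path from this log; it is a reader's aid, not part of the test:

```go
// kubeadmdocs.go - illustrative sketch: enumerate the documents in the
// generated kubeadm.yaml stream shown earlier in this log.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log; adjust when running outside the test host.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc["apiVersion"], doc["kind"])
	}
}
```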
	I1208 00:38:29.303277  832221 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:38:29.306745  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:29.413867  832221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:38:29.757020  832221 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:38:29.757040  832221 certs.go:195] generating shared ca certs ...
	I1208 00:38:29.757055  832221 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:38:29.757227  832221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:38:29.757282  832221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:38:29.757288  832221 certs.go:257] generating profile certs ...
	I1208 00:38:29.757406  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:38:29.757463  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:38:29.757516  832221 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:38:29.757642  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:38:29.757680  832221 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:38:29.757687  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:38:29.757715  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:38:29.757753  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:38:29.757774  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:38:29.757826  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:29.761393  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:38:29.783882  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:38:29.803461  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:38:29.822714  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:38:29.839981  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:38:29.857351  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:38:29.874240  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:38:29.890650  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:38:29.906746  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:38:29.924059  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:38:29.940748  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:38:29.958110  832221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:38:29.970093  832221 ssh_runner.go:195] Run: openssl version
	I1208 00:38:29.976075  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.983124  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:38:29.990594  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994143  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994197  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:38:30.038336  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:38:30.048261  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.057929  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:38:30.067406  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072044  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072104  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.114205  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:38:30.122367  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.130206  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:38:30.138222  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142205  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142264  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.188681  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:38:30.197066  832221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:38:30.201256  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:38:30.247635  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:38:30.290467  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:38:30.332415  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:38:30.373141  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:38:30.413979  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
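The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). A minimal sketch of the same check with crypto/x509; the paths are copied from the log and the loop is illustrative only:

```go
// checkend.go - illustrative sketch: the equivalent of
// "openssl x509 -noout -in <cert> -checkend 86400" using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	cutoff := time.Now().Add(24 * time.Hour) // -checkend 86400 == 24h
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if cert.NotAfter.Before(cutoff) {
			fmt.Printf("%s expires within 24h (NotAfter=%s)\n", path, cert.NotAfter)
		} else {
			fmt.Printf("%s ok until %s\n", path, cert.NotAfter)
		}
	}
}
```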
	I1208 00:38:30.454763  832221 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:30.454864  832221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:38:30.454938  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.481225  832221 cri.go:89] found id: ""
	I1208 00:38:30.481285  832221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:38:30.488799  832221 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:38:30.488808  832221 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:38:30.488859  832221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:38:30.495821  832221 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.496331  832221 kubeconfig.go:125] found "functional-525396" server: "https://192.168.49.2:8441"
	I1208 00:38:30.497560  832221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:38:30.505232  832221 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:23:53.462513047 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:38:29.298599774 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1208 00:38:30.505258  832221 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:38:30.505269  832221 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 00:38:30.505341  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.544576  832221 cri.go:89] found id: ""
	I1208 00:38:30.544636  832221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:38:30.564190  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:38:30.571945  832221 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  8 00:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  8 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  8 00:28 /etc/kubernetes/scheduler.conf
	
	I1208 00:38:30.572003  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:38:30.579767  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:38:30.588961  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.589038  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:38:30.596275  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.604001  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.604058  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.611049  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:38:30.618317  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.618369  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:38:30.625673  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:38:30.633203  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:30.679020  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.303260  832221 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.624214812s)
	I1208 00:38:32.303321  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.499121  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.557405  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.605845  832221 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:38:32.605924  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.106778  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.606873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.106818  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.606134  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.106245  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.607017  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.106011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.606401  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.106569  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.606153  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.106367  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.605995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.106910  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.606698  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.606687  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.106589  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.606067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.106823  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.606794  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.106122  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.606931  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.106765  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.606092  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.107046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.606088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.106757  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.606004  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.106996  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.606590  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.106432  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.106745  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.606390  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.106196  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.606618  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.106064  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.606867  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.106995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.606766  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.106131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.606779  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.106290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.606219  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.106089  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.607007  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.106717  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.106475  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.607046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.106582  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.606125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.107067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.606667  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.106461  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.606353  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.106471  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.606654  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.107110  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.607006  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.106780  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.606382  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.106088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.606332  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.106060  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.106803  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.606107  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.106414  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.606178  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.106868  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.606030  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.106375  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.606102  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.107011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.606304  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.106096  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.606827  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.606893  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.107045  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.606816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.106126  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.606899  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.106572  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.606111  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.606103  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.106801  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.606703  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.106595  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.606139  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.106918  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.606350  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.106147  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.606821  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.106994  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.606129  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.106114  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.606499  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.106132  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.606921  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.106736  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.606121  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.106425  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.606155  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.106763  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.106058  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.606943  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.106991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.606966  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.106181  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.606342  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.106653  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.606117  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.106026  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.606138  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:32.606213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:32.631935  832221 cri.go:89] found id: ""
	I1208 00:39:32.631949  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.631956  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:32.631962  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:32.632027  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:32.657240  832221 cri.go:89] found id: ""
	I1208 00:39:32.657260  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.657267  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:32.657273  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:32.657332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:32.686247  832221 cri.go:89] found id: ""
	I1208 00:39:32.686261  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.686269  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:32.686274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:32.686334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:32.712330  832221 cri.go:89] found id: ""
	I1208 00:39:32.712345  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.712352  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:32.712358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:32.712416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:32.738663  832221 cri.go:89] found id: ""
	I1208 00:39:32.738678  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.738685  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:32.738690  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:32.738755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:32.765710  832221 cri.go:89] found id: ""
	I1208 00:39:32.765725  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.765731  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:32.765737  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:32.765792  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:32.791480  832221 cri.go:89] found id: ""
	I1208 00:39:32.791494  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.791501  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:32.791509  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:32.791520  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:32.856630  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:32.856654  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:32.873574  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:32.873591  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:32.937953  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:32.937966  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:32.937977  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:33.008749  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:33.008776  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
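The retry lines above all issue the same process check roughly every half second before giving up and falling back to diagnostics. The following is a minimal, illustrative Go sketch of that polling pattern, not minikube's actual implementation: it runs pgrep locally rather than over the SSH runner the log shows, and the helper name waitForAPIServer and the 30s timeout are assumptions made for the example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer returns nil as soon as pgrep finds a matching process,
// or an error once the deadline is exceeded.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern,
		// which is the same check the log repeats above.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("kube-apiserver process not found within %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println("apiserver wait failed:", err)
		// At this point the real tool falls back to gathering diagnostics
		// (crictl ps, journalctl, kubectl describe nodes), as the log shows.
	}
}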
	I1208 00:39:35.542093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:35.553517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:35.553575  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:35.584212  832221 cri.go:89] found id: ""
	I1208 00:39:35.584226  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.584233  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:35.584238  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:35.584296  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:35.615871  832221 cri.go:89] found id: ""
	I1208 00:39:35.615885  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.615892  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:35.615897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:35.615954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:35.641597  832221 cri.go:89] found id: ""
	I1208 00:39:35.641611  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.641618  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:35.641623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:35.641683  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:35.667538  832221 cri.go:89] found id: ""
	I1208 00:39:35.667551  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.667567  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:35.667572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:35.667633  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:35.696105  832221 cri.go:89] found id: ""
	I1208 00:39:35.696118  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.696124  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:35.696130  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:35.696187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:35.725150  832221 cri.go:89] found id: ""
	I1208 00:39:35.725165  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.725172  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:35.725178  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:35.725236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:35.752762  832221 cri.go:89] found id: ""
	I1208 00:39:35.752776  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.752783  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:35.752791  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:35.752801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.780454  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:35.780471  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:35.846096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:35.846118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:35.863081  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:35.863098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:35.932235  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:35.932246  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:35.932259  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.502146  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:38.514634  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:38.514691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:38.548208  832221 cri.go:89] found id: ""
	I1208 00:39:38.548223  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.548230  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:38.548235  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:38.548305  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:38.579066  832221 cri.go:89] found id: ""
	I1208 00:39:38.579080  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.579087  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:38.579092  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:38.579154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:38.605928  832221 cri.go:89] found id: ""
	I1208 00:39:38.605942  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.605949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:38.605954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:38.606013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:38.631317  832221 cri.go:89] found id: ""
	I1208 00:39:38.631332  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.631339  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:38.631350  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:38.631410  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:38.657581  832221 cri.go:89] found id: ""
	I1208 00:39:38.657595  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.657602  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:38.657607  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:38.657664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:38.688104  832221 cri.go:89] found id: ""
	I1208 00:39:38.688118  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.688125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:38.688131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:38.688191  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:38.712900  832221 cri.go:89] found id: ""
	I1208 00:39:38.712914  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.712921  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:38.712929  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:38.712939  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.782215  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:38.782236  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:38.813188  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:38.813203  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:38.882554  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:38.882574  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:38.899573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:38.899590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:38.963587  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
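Each diagnostic pass in the log checks every control-plane component with `crictl ps -a --quiet --name=<component>` and reports "No container was found matching" when the output is empty. Below is a short, self-contained Go sketch of that per-component check under the same assumptions as before: it runs crictl directly on the local host instead of through the SSH runner, and the function and variable names are illustrative, not taken from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names the log queries in turn.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainerIDs returns the container IDs crictl reports for a component,
// one per line; an empty slice means no container exists in any state.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Same condition the log reports as a warning for every component.
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}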
	I1208 00:39:41.464816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:41.476933  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:41.476994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:41.519038  832221 cri.go:89] found id: ""
	I1208 00:39:41.519052  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.519059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:41.519065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:41.519120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:41.549931  832221 cri.go:89] found id: ""
	I1208 00:39:41.549946  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.549953  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:41.549958  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:41.550016  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:41.579952  832221 cri.go:89] found id: ""
	I1208 00:39:41.579966  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.579973  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:41.579978  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:41.580038  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:41.609851  832221 cri.go:89] found id: ""
	I1208 00:39:41.609865  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.609873  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:41.609878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:41.609940  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:41.635896  832221 cri.go:89] found id: ""
	I1208 00:39:41.635910  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.635917  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:41.635923  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:41.635986  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:41.662056  832221 cri.go:89] found id: ""
	I1208 00:39:41.662083  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.662091  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:41.662097  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:41.662170  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:41.687327  832221 cri.go:89] found id: ""
	I1208 00:39:41.687342  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.687349  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:41.687357  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:41.687367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:41.753129  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:41.753148  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:41.769911  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:41.769927  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:41.838088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.838099  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:41.838111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:41.910629  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:41.910651  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:44.440476  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:44.450677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:44.450737  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:44.477661  832221 cri.go:89] found id: ""
	I1208 00:39:44.477674  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.477681  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:44.477687  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:44.477754  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:44.502810  832221 cri.go:89] found id: ""
	I1208 00:39:44.502824  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.502831  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:44.502836  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:44.502922  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:44.536158  832221 cri.go:89] found id: ""
	I1208 00:39:44.536171  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.536178  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:44.536187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:44.536245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:44.569819  832221 cri.go:89] found id: ""
	I1208 00:39:44.569832  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.569839  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:44.569844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:44.569900  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:44.596822  832221 cri.go:89] found id: ""
	I1208 00:39:44.596837  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.596844  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:44.596849  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:44.596909  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:44.626118  832221 cri.go:89] found id: ""
	I1208 00:39:44.626132  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.626139  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:44.626159  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:44.626220  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:44.651327  832221 cri.go:89] found id: ""
	I1208 00:39:44.651341  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.651348  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:44.651356  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:44.651366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:44.717153  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:44.717174  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:44.734169  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:44.734200  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:44.800240  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:44.800252  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:44.800263  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:44.873699  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:44.873729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
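After the container checks come up empty, each cycle gathers the same set of diagnostics: kubelet and CRI-O journals, filtered dmesg, `kubectl describe nodes` against the node-local kubeconfig (which fails here because nothing is listening on localhost:8441), and a container listing. The sketch below just replays those shell commands from Go for illustration; the map of labels to commands is copied from the log, while the structure and names around it are assumptions rather than minikube's real logs.go.

package main

import (
	"fmt"
	"os/exec"
)

// gatherers maps each "Gathering logs for ..." label to the shell command
// the log runs for it.
var gatherers = map[string]string{
	"kubelet":          `sudo journalctl -u kubelet -n 400`,
	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	"CRI-O":            `sudo journalctl -u crio -n 400`,
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for label, cmd := range gatherers {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// e.g. describe nodes exits 1 while the apiserver is down,
			// exactly as the repeated failures in the log show.
			fmt.Printf("gathering %s failed: %v\n", label, err)
		}
		fmt.Printf("=== %s (%d bytes) ===\n", label, len(out))
	}
}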
	I1208 00:39:47.404232  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:47.415493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:47.415558  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:47.442934  832221 cri.go:89] found id: ""
	I1208 00:39:47.442948  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.442955  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:47.442961  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:47.443025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:47.468072  832221 cri.go:89] found id: ""
	I1208 00:39:47.468086  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.468093  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:47.468099  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:47.468169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:47.499439  832221 cri.go:89] found id: ""
	I1208 00:39:47.499452  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.499460  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:47.499465  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:47.499522  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:47.525160  832221 cri.go:89] found id: ""
	I1208 00:39:47.525173  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.525180  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:47.525186  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:47.525261  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:47.557881  832221 cri.go:89] found id: ""
	I1208 00:39:47.557902  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.557909  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:47.557915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:47.557973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:47.585993  832221 cri.go:89] found id: ""
	I1208 00:39:47.586006  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.586013  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:47.586018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:47.586074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:47.611544  832221 cri.go:89] found id: ""
	I1208 00:39:47.611559  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.611565  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:47.611573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:47.611594  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:47.673948  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:47.673960  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:47.673971  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:47.746050  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:47.746071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.778206  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:47.778228  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:47.843769  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:47.843788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.361131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:50.373118  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:50.373178  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:50.402177  832221 cri.go:89] found id: ""
	I1208 00:39:50.402192  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.402199  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:50.402204  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:50.402262  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:50.428277  832221 cri.go:89] found id: ""
	I1208 00:39:50.428291  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.428298  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:50.428303  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:50.428361  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:50.453780  832221 cri.go:89] found id: ""
	I1208 00:39:50.453793  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.453801  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:50.453806  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:50.453867  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:50.478816  832221 cri.go:89] found id: ""
	I1208 00:39:50.478830  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.478838  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:50.478887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:50.478952  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:50.506494  832221 cri.go:89] found id: ""
	I1208 00:39:50.506508  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.506516  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:50.506523  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:50.506581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:50.548254  832221 cri.go:89] found id: ""
	I1208 00:39:50.548267  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.548275  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:50.548289  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:50.548345  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:50.580999  832221 cri.go:89] found id: ""
	I1208 00:39:50.581013  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.581020  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:50.581028  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:50.581038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:50.646872  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:50.646894  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.663705  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:50.663722  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:50.731208  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:50.731220  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:50.731231  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:50.800530  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:50.800552  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:53.328838  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:53.338798  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:53.338876  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:53.364078  832221 cri.go:89] found id: ""
	I1208 00:39:53.364093  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.364100  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:53.364106  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:53.364165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:53.389870  832221 cri.go:89] found id: ""
	I1208 00:39:53.389884  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.389891  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:53.389897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:53.389955  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:53.415578  832221 cri.go:89] found id: ""
	I1208 00:39:53.415592  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.415600  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:53.415606  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:53.415664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:53.440749  832221 cri.go:89] found id: ""
	I1208 00:39:53.440763  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.440769  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:53.440775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:53.440837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:53.469528  832221 cri.go:89] found id: ""
	I1208 00:39:53.469542  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.469550  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:53.469555  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:53.469614  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:53.494205  832221 cri.go:89] found id: ""
	I1208 00:39:53.494219  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.494225  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:53.494231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:53.494286  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:53.536734  832221 cri.go:89] found id: ""
	I1208 00:39:53.536748  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.536755  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:53.536763  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:53.536773  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:53.608590  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:53.608610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:53.625117  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:53.625134  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:53.687237  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:53.687248  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:53.687258  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:53.755459  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:53.755480  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.290756  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:56.302211  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:56.302272  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:56.327085  832221 cri.go:89] found id: ""
	I1208 00:39:56.327098  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.327105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:56.327110  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:56.327165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:56.351553  832221 cri.go:89] found id: ""
	I1208 00:39:56.351567  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.351574  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:56.351579  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:56.351636  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:56.375432  832221 cri.go:89] found id: ""
	I1208 00:39:56.375445  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.375451  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:56.375456  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:56.375513  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:56.399254  832221 cri.go:89] found id: ""
	I1208 00:39:56.399267  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.399274  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:56.399282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:56.399337  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:56.424239  832221 cri.go:89] found id: ""
	I1208 00:39:56.424253  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.424260  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:56.424265  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:56.424322  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:56.447970  832221 cri.go:89] found id: ""
	I1208 00:39:56.447983  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.447990  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:56.447996  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:56.448059  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:56.480639  832221 cri.go:89] found id: ""
	I1208 00:39:56.480652  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.480659  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:56.480666  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:56.480680  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.514333  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:56.514349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:56.587248  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:56.587268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:56.604138  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:56.604156  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:56.667583  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:56.667593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:56.667605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.236478  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:59.246590  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:59.246653  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:59.274726  832221 cri.go:89] found id: ""
	I1208 00:39:59.274739  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.274746  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:59.274752  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:59.274816  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:59.302946  832221 cri.go:89] found id: ""
	I1208 00:39:59.302960  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.302967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:59.302972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:59.303036  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:59.328486  832221 cri.go:89] found id: ""
	I1208 00:39:59.328510  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.328517  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:59.328522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:59.328583  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:59.354620  832221 cri.go:89] found id: ""
	I1208 00:39:59.354638  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.354645  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:59.354651  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:59.354722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:59.379131  832221 cri.go:89] found id: ""
	I1208 00:39:59.379145  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.379152  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:59.379157  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:59.379221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:59.407900  832221 cri.go:89] found id: ""
	I1208 00:39:59.407915  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.407921  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:59.407930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:59.407999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:59.432790  832221 cri.go:89] found id: ""
	I1208 00:39:59.432804  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.432811  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:59.432819  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:59.432829  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:59.498500  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:59.498521  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:59.517843  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:59.517860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:59.592346  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:59.592356  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:59.592366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.660798  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:59.660821  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.193318  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:02.204389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:02.204452  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:02.233248  832221 cri.go:89] found id: ""
	I1208 00:40:02.233262  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.233272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:02.233277  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:02.233338  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:02.259542  832221 cri.go:89] found id: ""
	I1208 00:40:02.259555  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.259562  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:02.259567  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:02.259626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:02.284406  832221 cri.go:89] found id: ""
	I1208 00:40:02.284421  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.284428  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:02.284433  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:02.284492  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:02.314792  832221 cri.go:89] found id: ""
	I1208 00:40:02.314807  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.314815  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:02.314820  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:02.314902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:02.345720  832221 cri.go:89] found id: ""
	I1208 00:40:02.345735  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.345742  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:02.345748  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:02.345806  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:02.374260  832221 cri.go:89] found id: ""
	I1208 00:40:02.374275  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.374282  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:02.374288  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:02.374356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:02.401424  832221 cri.go:89] found id: ""
	I1208 00:40:02.401448  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.401456  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:02.401464  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:02.401477  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:02.418749  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:02.418772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:02.488580  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:02.488593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:02.488605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:02.561942  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:02.561963  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.594984  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:02.595001  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.164061  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:05.174102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:05.174162  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:05.200676  832221 cri.go:89] found id: ""
	I1208 00:40:05.200690  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.200697  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:05.200702  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:05.200762  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:05.229843  832221 cri.go:89] found id: ""
	I1208 00:40:05.229857  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.229864  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:05.229869  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:05.229923  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:05.254905  832221 cri.go:89] found id: ""
	I1208 00:40:05.254919  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.254926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:05.254930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:05.254989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:05.284106  832221 cri.go:89] found id: ""
	I1208 00:40:05.284120  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.284127  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:05.284132  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:05.284197  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:05.308626  832221 cri.go:89] found id: ""
	I1208 00:40:05.308640  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.308647  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:05.308652  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:05.308714  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:05.337161  832221 cri.go:89] found id: ""
	I1208 00:40:05.337175  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.337182  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:05.337187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:05.337268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:05.362077  832221 cri.go:89] found id: ""
	I1208 00:40:05.362091  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.362098  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:05.362105  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:05.362116  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.428096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:05.428115  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:05.445139  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:05.445161  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:05.507290  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:05.507310  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:05.507321  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:05.586340  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:05.586361  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.118998  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:08.129512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:08.129588  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:08.156251  832221 cri.go:89] found id: ""
	I1208 00:40:08.156265  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.156272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:08.156278  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:08.156344  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:08.183906  832221 cri.go:89] found id: ""
	I1208 00:40:08.183919  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.183926  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:08.183931  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:08.183987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:08.210358  832221 cri.go:89] found id: ""
	I1208 00:40:08.210372  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.210379  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:08.210384  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:08.210442  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:08.235462  832221 cri.go:89] found id: ""
	I1208 00:40:08.235476  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.235483  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:08.235489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:08.235544  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:08.261687  832221 cri.go:89] found id: ""
	I1208 00:40:08.261700  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.261707  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:08.261713  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:08.261771  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:08.285826  832221 cri.go:89] found id: ""
	I1208 00:40:08.285842  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.285849  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:08.285854  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:08.285912  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:08.312132  832221 cri.go:89] found id: ""
	I1208 00:40:08.312146  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.312153  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:08.312161  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:08.312171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:08.380160  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:08.380177  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:08.380187  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:08.455282  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:08.455305  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.490186  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:08.490207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:08.563751  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:08.563779  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.082398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:11.092581  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:11.092642  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:11.118553  832221 cri.go:89] found id: ""
	I1208 00:40:11.118568  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.118575  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:11.118580  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:11.118638  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:11.144055  832221 cri.go:89] found id: ""
	I1208 00:40:11.144070  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.144077  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:11.144082  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:11.144144  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:11.169906  832221 cri.go:89] found id: ""
	I1208 00:40:11.169919  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.169926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:11.169931  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:11.169988  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:11.197596  832221 cri.go:89] found id: ""
	I1208 00:40:11.197610  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.197617  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:11.197623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:11.197681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:11.223606  832221 cri.go:89] found id: ""
	I1208 00:40:11.223624  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.223631  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:11.223636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:11.223693  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:11.248818  832221 cri.go:89] found id: ""
	I1208 00:40:11.248832  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.248838  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:11.248844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:11.248902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:11.273540  832221 cri.go:89] found id: ""
	I1208 00:40:11.273554  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.273561  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:11.273568  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:11.273579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:11.338706  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:11.338726  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.357554  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:11.357571  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:11.420756  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:11.420767  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:11.420788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:11.489139  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:11.489157  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.024714  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:14.035808  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:14.035873  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:14.061793  832221 cri.go:89] found id: ""
	I1208 00:40:14.061807  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.061814  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:14.061819  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:14.061875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:14.090633  832221 cri.go:89] found id: ""
	I1208 00:40:14.090647  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.090654  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:14.090661  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:14.090719  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:14.115546  832221 cri.go:89] found id: ""
	I1208 00:40:14.115560  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.115567  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:14.115572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:14.115629  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:14.141065  832221 cri.go:89] found id: ""
	I1208 00:40:14.141079  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.141086  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:14.141091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:14.141154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:14.165799  832221 cri.go:89] found id: ""
	I1208 00:40:14.165814  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.165821  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:14.165826  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:14.165886  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:14.195480  832221 cri.go:89] found id: ""
	I1208 00:40:14.195494  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.195501  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:14.195506  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:14.195564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:14.220362  832221 cri.go:89] found id: ""
	I1208 00:40:14.220377  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.220384  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:14.220392  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:14.220405  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:14.287292  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:14.287303  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:14.287313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:14.356018  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:14.356038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.387237  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:14.387253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:14.454492  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:14.454512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:16.972125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:16.982309  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:16.982372  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:17.017693  832221 cri.go:89] found id: ""
	I1208 00:40:17.017706  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.017714  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:17.017719  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:17.017778  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:17.044376  832221 cri.go:89] found id: ""
	I1208 00:40:17.044391  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.044399  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:17.044404  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:17.044473  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:17.070587  832221 cri.go:89] found id: ""
	I1208 00:40:17.070601  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.070608  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:17.070613  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:17.070672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:17.095978  832221 cri.go:89] found id: ""
	I1208 00:40:17.095992  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.095999  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:17.096004  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:17.096062  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:17.122135  832221 cri.go:89] found id: ""
	I1208 00:40:17.122149  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.122156  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:17.122161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:17.122221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:17.148103  832221 cri.go:89] found id: ""
	I1208 00:40:17.148118  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.148125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:17.148131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:17.148192  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:17.172943  832221 cri.go:89] found id: ""
	I1208 00:40:17.172957  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.172964  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:17.172971  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:17.172982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:17.238368  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:17.238387  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:17.255667  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:17.255685  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:17.321644  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:17.321656  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:17.321667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:17.394476  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:17.394498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:19.927345  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:19.939629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:19.939691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:19.965406  832221 cri.go:89] found id: ""
	I1208 00:40:19.965420  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.965427  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:19.965432  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:19.965500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:19.992009  832221 cri.go:89] found id: ""
	I1208 00:40:19.992023  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.992030  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:19.992035  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:19.992098  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:20.029302  832221 cri.go:89] found id: ""
	I1208 00:40:20.029317  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.029324  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:20.029330  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:20.029399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:20.058056  832221 cri.go:89] found id: ""
	I1208 00:40:20.058071  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.058085  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:20.058091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:20.058165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:20.084189  832221 cri.go:89] found id: ""
	I1208 00:40:20.084203  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.084211  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:20.084216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:20.084291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:20.111361  832221 cri.go:89] found id: ""
	I1208 00:40:20.111376  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.111383  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:20.111389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:20.111449  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:20.141805  832221 cri.go:89] found id: ""
	I1208 00:40:20.141819  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.141826  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:20.141834  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:20.141844  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:20.169490  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:20.169506  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:20.234965  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:20.234985  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:20.252060  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:20.252078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:20.320257  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:20.320267  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:20.320280  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:22.888858  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:22.899382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:22.899447  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:22.924604  832221 cri.go:89] found id: ""
	I1208 00:40:22.924619  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.924625  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:22.924631  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:22.924698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:22.955239  832221 cri.go:89] found id: ""
	I1208 00:40:22.955253  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.955259  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:22.955264  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:22.955323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:22.981222  832221 cri.go:89] found id: ""
	I1208 00:40:22.981237  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.981244  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:22.981250  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:22.981317  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:23.011070  832221 cri.go:89] found id: ""
	I1208 00:40:23.011085  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.011092  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:23.011098  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:23.011169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:23.038240  832221 cri.go:89] found id: ""
	I1208 00:40:23.038255  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.038263  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:23.038268  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:23.038329  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:23.068452  832221 cri.go:89] found id: ""
	I1208 00:40:23.068466  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.068473  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:23.068479  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:23.068536  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:23.094006  832221 cri.go:89] found id: ""
	I1208 00:40:23.094020  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.094027  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:23.094035  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:23.094047  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:23.160498  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:23.160517  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:23.177630  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:23.177647  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:23.241245  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:23.241256  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:23.241268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:23.310140  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:23.310159  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:25.838645  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:25.849038  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:25.849104  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:25.876484  832221 cri.go:89] found id: ""
	I1208 00:40:25.876499  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.876506  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:25.876512  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:25.876574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:25.906565  832221 cri.go:89] found id: ""
	I1208 00:40:25.906579  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.906587  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:25.906592  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:25.906649  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:25.937448  832221 cri.go:89] found id: ""
	I1208 00:40:25.937463  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.937471  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:25.937476  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:25.937537  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:25.966528  832221 cri.go:89] found id: ""
	I1208 00:40:25.966542  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.966549  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:25.966554  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:25.966609  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:25.993465  832221 cri.go:89] found id: ""
	I1208 00:40:25.993480  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.993487  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:25.993493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:25.993554  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:26.022155  832221 cri.go:89] found id: ""
	I1208 00:40:26.022168  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.022175  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:26.022181  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:26.022239  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:26.049049  832221 cri.go:89] found id: ""
	I1208 00:40:26.049064  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.049072  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:26.049087  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:26.049098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:26.119386  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:26.119406  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:26.155712  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:26.155729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:26.223788  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:26.223809  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:26.245587  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:26.245610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:26.309129  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:28.809355  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:28.819547  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:28.819610  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:28.849672  832221 cri.go:89] found id: ""
	I1208 00:40:28.849687  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.849694  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:28.849700  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:28.849760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:28.880748  832221 cri.go:89] found id: ""
	I1208 00:40:28.880763  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.880769  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:28.880774  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:28.880837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:28.908198  832221 cri.go:89] found id: ""
	I1208 00:40:28.908212  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.908219  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:28.908224  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:28.908282  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:28.933130  832221 cri.go:89] found id: ""
	I1208 00:40:28.933144  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.933151  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:28.933156  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:28.933222  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:28.964126  832221 cri.go:89] found id: ""
	I1208 00:40:28.964140  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.964147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:28.964153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:28.964210  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:28.990484  832221 cri.go:89] found id: ""
	I1208 00:40:28.990499  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.990506  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:28.990512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:28.990573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:29.017806  832221 cri.go:89] found id: ""
	I1208 00:40:29.017820  832221 logs.go:282] 0 containers: []
	W1208 00:40:29.017828  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:29.017835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:29.017847  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:29.084613  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:29.084635  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:29.101973  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:29.101992  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:29.173921  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:29.173933  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:29.173944  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:29.240893  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:29.240915  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:31.777057  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:31.790721  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:31.790788  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:31.822768  832221 cri.go:89] found id: ""
	I1208 00:40:31.822783  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.822790  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:31.822795  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:31.822969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:31.848644  832221 cri.go:89] found id: ""
	I1208 00:40:31.848657  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.848672  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:31.848678  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:31.848745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:31.874088  832221 cri.go:89] found id: ""
	I1208 00:40:31.874101  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.874117  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:31.874123  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:31.874179  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:31.899211  832221 cri.go:89] found id: ""
	I1208 00:40:31.899234  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.899242  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:31.899247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:31.899316  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:31.924268  832221 cri.go:89] found id: ""
	I1208 00:40:31.924282  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.924290  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:31.924295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:31.924355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:31.950349  832221 cri.go:89] found id: ""
	I1208 00:40:31.950363  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.950370  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:31.950376  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:31.950433  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:31.979825  832221 cri.go:89] found id: ""
	I1208 00:40:31.979848  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.979856  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:31.979864  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:31.979875  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:32.045728  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:32.045748  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:32.062977  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:32.062995  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:32.127567  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:32.127579  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:32.127590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:32.195761  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:32.195782  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:34.725887  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:34.742661  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:34.742722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:34.778651  832221 cri.go:89] found id: ""
	I1208 00:40:34.778665  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.778672  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:34.778678  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:34.778736  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:34.811974  832221 cri.go:89] found id: ""
	I1208 00:40:34.811988  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.811995  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:34.812000  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:34.812057  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:34.844697  832221 cri.go:89] found id: ""
	I1208 00:40:34.844712  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.844719  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:34.844725  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:34.844782  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:34.872482  832221 cri.go:89] found id: ""
	I1208 00:40:34.872495  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.872502  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:34.872509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:34.872564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:34.898220  832221 cri.go:89] found id: ""
	I1208 00:40:34.898235  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.898242  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:34.898247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:34.898308  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:34.925442  832221 cri.go:89] found id: ""
	I1208 00:40:34.925457  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.925464  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:34.925470  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:34.925527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:34.952326  832221 cri.go:89] found id: ""
	I1208 00:40:34.952340  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.952347  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:34.952355  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:34.952367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:35.018286  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:35.018308  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:35.036568  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:35.036588  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:35.105378  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:35.105389  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:35.105403  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:35.175887  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:35.175909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:37.712873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:37.722837  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:37.722915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:37.748671  832221 cri.go:89] found id: ""
	I1208 00:40:37.748684  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.748691  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:37.748697  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:37.748760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:37.787454  832221 cri.go:89] found id: ""
	I1208 00:40:37.787467  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.787475  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:37.787479  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:37.787540  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:37.827928  832221 cri.go:89] found id: ""
	I1208 00:40:37.827942  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.827949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:37.827954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:37.828015  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:37.853248  832221 cri.go:89] found id: ""
	I1208 00:40:37.853261  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.853268  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:37.853274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:37.853333  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:37.881771  832221 cri.go:89] found id: ""
	I1208 00:40:37.881785  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.881792  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:37.881797  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:37.881862  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:37.908845  832221 cri.go:89] found id: ""
	I1208 00:40:37.908858  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.908864  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:37.908870  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:37.908927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:37.933663  832221 cri.go:89] found id: ""
	I1208 00:40:37.933676  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.933684  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:37.933691  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:37.933702  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:37.950237  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:37.950253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:38.015251  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:38.015261  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:38.015272  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:38.086877  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:38.086899  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:38.120835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:38.120851  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:40.690876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:40.701698  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:40.701757  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:40.728919  832221 cri.go:89] found id: ""
	I1208 00:40:40.728933  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.728944  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:40.728950  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:40.729006  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:40.756412  832221 cri.go:89] found id: ""
	I1208 00:40:40.756426  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.756433  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:40.756438  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:40.756496  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:40.785209  832221 cri.go:89] found id: ""
	I1208 00:40:40.785223  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.785230  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:40.785235  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:40.785293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:40.812803  832221 cri.go:89] found id: ""
	I1208 00:40:40.812816  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.812823  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:40.812828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:40.812884  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:40.841663  832221 cri.go:89] found id: ""
	I1208 00:40:40.841676  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.841683  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:40.841688  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:40.841745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:40.867267  832221 cri.go:89] found id: ""
	I1208 00:40:40.867281  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.867298  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:40.867304  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:40.867365  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:40.896639  832221 cri.go:89] found id: ""
	I1208 00:40:40.896652  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.896661  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:40.896668  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:40.896678  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:40.960376  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:40.960386  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:40.960397  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:41.032818  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:41.032839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.062752  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:41.062771  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:41.130656  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:41.130676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.649290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:43.659339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:43.659404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:43.685304  832221 cri.go:89] found id: ""
	I1208 00:40:43.685319  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.685326  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:43.685332  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:43.685394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:43.710805  832221 cri.go:89] found id: ""
	I1208 00:40:43.710820  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.710827  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:43.710856  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:43.710933  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:43.735910  832221 cri.go:89] found id: ""
	I1208 00:40:43.735923  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.735930  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:43.735936  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:43.735994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:43.776908  832221 cri.go:89] found id: ""
	I1208 00:40:43.776921  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.776928  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:43.776934  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:43.776997  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:43.809711  832221 cri.go:89] found id: ""
	I1208 00:40:43.809724  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.809731  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:43.809736  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:43.809794  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:43.838996  832221 cri.go:89] found id: ""
	I1208 00:40:43.839009  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.839016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:43.839022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:43.839087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:43.864075  832221 cri.go:89] found id: ""
	I1208 00:40:43.864088  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.864095  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:43.864103  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:43.864120  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:43.930430  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:43.930449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.948281  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:43.948301  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:44.016438  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:44.016448  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:44.016462  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:44.087788  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:44.087808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.619014  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:46.629647  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:46.629711  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:46.655337  832221 cri.go:89] found id: ""
	I1208 00:40:46.655352  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.655360  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:46.655365  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:46.655426  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:46.685122  832221 cri.go:89] found id: ""
	I1208 00:40:46.685137  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.685145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:46.685150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:46.685218  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:46.711647  832221 cri.go:89] found id: ""
	I1208 00:40:46.711661  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.711669  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:46.711674  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:46.711739  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:46.739056  832221 cri.go:89] found id: ""
	I1208 00:40:46.739070  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.739077  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:46.739082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:46.739138  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:46.777014  832221 cri.go:89] found id: ""
	I1208 00:40:46.777040  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.777047  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:46.777053  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:46.777120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:46.821392  832221 cri.go:89] found id: ""
	I1208 00:40:46.821407  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.821414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:46.821419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:46.821481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:46.847683  832221 cri.go:89] found id: ""
	I1208 00:40:46.847706  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.847714  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:46.847722  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:46.847735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.880771  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:46.880787  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:46.946188  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:46.946208  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:46.965130  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:46.965147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:47.035809  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:47.035820  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:47.035843  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.603876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:49.614271  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:49.614332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:49.640814  832221 cri.go:89] found id: ""
	I1208 00:40:49.640827  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.640834  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:49.640840  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:49.640898  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:49.670323  832221 cri.go:89] found id: ""
	I1208 00:40:49.670337  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.670345  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:49.670351  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:49.670409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:49.696270  832221 cri.go:89] found id: ""
	I1208 00:40:49.696284  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.696290  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:49.696295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:49.696353  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:49.725434  832221 cri.go:89] found id: ""
	I1208 00:40:49.725448  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.725454  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:49.725468  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:49.725525  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:49.760362  832221 cri.go:89] found id: ""
	I1208 00:40:49.760375  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.760382  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:49.760393  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:49.760450  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:49.789531  832221 cri.go:89] found id: ""
	I1208 00:40:49.789545  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.789552  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:49.789567  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:49.789637  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:49.818353  832221 cri.go:89] found id: ""
	I1208 00:40:49.818367  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.818374  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:49.818390  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:49.818401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.890934  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:49.890956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:49.919198  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:49.919214  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:49.988173  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:49.988194  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:50.007229  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:50.007249  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:50.081725  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
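	
	(Illustrative aside, not part of the captured log: each cycle above repeats the same probe sequence — pgrep for a kube-apiserver process, a crictl listing per control-plane component, then log gathering whose kubectl step fails against https://localhost:8441 with "connection refused" because no apiserver container exists yet. The following is a minimal Go sketch of that sequence under stated assumptions: it reuses only the commands and the port visible in this run; the componentNames slice, the 3-attempt loop, and the 2s/3s timeouts are illustrative choices, not minikube source code.)
	
	// sketch.go — hedged reproduction of the probe loop shown in the log above; not minikube code.
	package main
	
	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)
	
	// componentNames mirrors the container names the log queries with crictl.
	var componentNames = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	
	func main() {
		for attempt := 1; attempt <= 3; attempt++ {
			// Same process check the log runs: sudo pgrep -xnf kube-apiserver.*minikube.*
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
				fmt.Printf("attempt %d: no kube-apiserver process found: %v\n", attempt, err)
			}
	
			// Same per-component listing the log runs: sudo crictl ps -a --quiet --name=<component>
			for _, name := range componentNames {
				out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
				fmt.Printf("attempt %d: %s containers: %q\n", attempt, name, string(out))
			}
	
			// The repeated "connection refused" errors come from dialing the apiserver port;
			// 8441 is taken from this run's kubeconfig, not a general default.
			if conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second); err != nil {
				fmt.Printf("attempt %d: apiserver not reachable: %v\n", attempt, err)
			} else {
				conn.Close()
				return
			}
			time.Sleep(3 * time.Second)
		}
	}
	
	(End of aside; the captured log continues below with the next retry cycle.)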
	I1208 00:40:52.581991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:52.592775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:52.592847  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:52.619761  832221 cri.go:89] found id: ""
	I1208 00:40:52.619775  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.619782  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:52.619788  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:52.619853  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:52.647647  832221 cri.go:89] found id: ""
	I1208 00:40:52.647662  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.647669  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:52.647674  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:52.647761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:52.673131  832221 cri.go:89] found id: ""
	I1208 00:40:52.673145  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.673152  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:52.673161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:52.673228  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:52.699525  832221 cri.go:89] found id: ""
	I1208 00:40:52.699540  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.699547  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:52.699553  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:52.699620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:52.725467  832221 cri.go:89] found id: ""
	I1208 00:40:52.725482  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.725489  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:52.725494  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:52.725556  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:52.756767  832221 cri.go:89] found id: ""
	I1208 00:40:52.756782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.756790  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:52.756796  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:52.756855  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:52.787768  832221 cri.go:89] found id: ""
	I1208 00:40:52.787782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.787790  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:52.787797  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:52.787808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:52.817811  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:52.817827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:52.889380  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:52.889401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:52.906939  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:52.906956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:52.971866  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.971876  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:52.971889  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.544702  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:55.554800  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:55.554875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:55.581294  832221 cri.go:89] found id: ""
	I1208 00:40:55.581309  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.581316  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:55.581321  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:55.581384  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:55.609189  832221 cri.go:89] found id: ""
	I1208 00:40:55.609210  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.609217  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:55.609222  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:55.609281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:55.636121  832221 cri.go:89] found id: ""
	I1208 00:40:55.636135  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.636142  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:55.636147  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:55.636212  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:55.661670  832221 cri.go:89] found id: ""
	I1208 00:40:55.661684  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.661691  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:55.661697  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:55.661756  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:55.687332  832221 cri.go:89] found id: ""
	I1208 00:40:55.687345  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.687352  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:55.687358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:55.687416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:55.713054  832221 cri.go:89] found id: ""
	I1208 00:40:55.713069  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.713076  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:55.713082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:55.713140  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:55.742979  832221 cri.go:89] found id: ""
	I1208 00:40:55.742993  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.743000  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:55.743008  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:55.743019  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:55.761280  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:55.761297  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:55.838925  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:55.838936  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:55.838949  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.910195  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:55.910218  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:55.940346  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:55.940364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.509357  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:58.519836  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:58.519901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:58.545859  832221 cri.go:89] found id: ""
	I1208 00:40:58.545874  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.545881  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:58.545887  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:58.545948  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:58.575589  832221 cri.go:89] found id: ""
	I1208 00:40:58.575603  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.575609  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:58.575614  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:58.575672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:58.604890  832221 cri.go:89] found id: ""
	I1208 00:40:58.604905  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.604911  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:58.604917  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:58.604974  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:58.630992  832221 cri.go:89] found id: ""
	I1208 00:40:58.631006  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.631013  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:58.631018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:58.631075  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:58.656862  832221 cri.go:89] found id: ""
	I1208 00:40:58.656875  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.656882  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:58.656887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:58.656950  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:58.693729  832221 cri.go:89] found id: ""
	I1208 00:40:58.693744  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.693751  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:58.693756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:58.693815  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:58.719999  832221 cri.go:89] found id: ""
	I1208 00:40:58.720014  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.720021  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:58.720029  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:58.720040  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.787457  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:58.787475  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:58.809951  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:58.809970  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:58.877531  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:58.877584  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:58.877595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:58.944804  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:58.944823  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:01.474302  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:01.485101  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:01.485163  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:01.512067  832221 cri.go:89] found id: ""
	I1208 00:41:01.512081  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.512094  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:01.512100  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:01.512173  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:01.538625  832221 cri.go:89] found id: ""
	I1208 00:41:01.538639  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.538646  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:01.538651  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:01.538712  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:01.564246  832221 cri.go:89] found id: ""
	I1208 00:41:01.564260  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.564268  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:01.564273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:01.564341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:01.590766  832221 cri.go:89] found id: ""
	I1208 00:41:01.590780  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.590787  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:01.590793  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:01.590880  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:01.618080  832221 cri.go:89] found id: ""
	I1208 00:41:01.618095  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.618102  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:01.618107  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:01.618166  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:01.644849  832221 cri.go:89] found id: ""
	I1208 00:41:01.644864  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.644872  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:01.644878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:01.644943  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:01.670907  832221 cri.go:89] found id: ""
	I1208 00:41:01.670927  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.670945  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:01.670953  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:01.670972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:01.737140  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:01.737160  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:01.756176  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:01.756199  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:01.837855  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:01.837866  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:01.837880  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:01.907644  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:01.907665  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:04.439011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:04.449676  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:04.449738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:04.475094  832221 cri.go:89] found id: ""
	I1208 00:41:04.475107  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.475116  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:04.475122  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:04.475180  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:04.499488  832221 cri.go:89] found id: ""
	I1208 00:41:04.499502  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.499509  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:04.499514  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:04.499574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:04.524302  832221 cri.go:89] found id: ""
	I1208 00:41:04.524315  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.524322  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:04.524328  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:04.524399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:04.550178  832221 cri.go:89] found id: ""
	I1208 00:41:04.550192  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.550207  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:04.550214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:04.550290  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:04.579863  832221 cri.go:89] found id: ""
	I1208 00:41:04.579876  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.579883  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:04.579888  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:04.579947  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:04.612186  832221 cri.go:89] found id: ""
	I1208 00:41:04.612200  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.612207  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:04.612212  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:04.612268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:04.638270  832221 cri.go:89] found id: ""
	I1208 00:41:04.638291  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.638298  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:04.638305  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:04.638316  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:04.704479  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:04.704498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:04.721141  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:04.721158  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:04.791977  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:04.791987  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:04.792009  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:04.869143  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:04.869164  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:07.399175  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:07.409630  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:07.409692  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:07.436029  832221 cri.go:89] found id: ""
	I1208 00:41:07.436051  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.436059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:07.436065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:07.436133  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:07.462353  832221 cri.go:89] found id: ""
	I1208 00:41:07.462367  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.462374  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:07.462379  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:07.462438  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:07.488128  832221 cri.go:89] found id: ""
	I1208 00:41:07.488142  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.488149  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:07.488154  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:07.488217  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:07.516680  832221 cri.go:89] found id: ""
	I1208 00:41:07.516694  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.516700  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:07.516705  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:07.516761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:07.541724  832221 cri.go:89] found id: ""
	I1208 00:41:07.541738  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.541747  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:07.541752  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:07.541809  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:07.566019  832221 cri.go:89] found id: ""
	I1208 00:41:07.566033  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.566049  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:07.566055  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:07.566120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:07.590763  832221 cri.go:89] found id: ""
	I1208 00:41:07.590786  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.590793  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:07.590800  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:07.590811  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:07.655603  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:07.655627  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:07.672718  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:07.672735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:07.739768  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:07.739777  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:07.739788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:07.818332  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:07.818351  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
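The block above is one full pass of minikube's wait-for-apiserver diagnostics: probe for a kube-apiserver process, ask CRI-O for each control-plane container by name, find none, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal shell sketch of the per-component check, using the same crictl invocation and component names recorded above (illustrative only, not part of the harness):

	# one crictl query per component, mirroring the cri.go:54 / logs.go:284 entries above
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done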
	I1208 00:41:10.352542  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:10.362750  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:10.362807  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:10.387611  832221 cri.go:89] found id: ""
	I1208 00:41:10.387625  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.387631  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:10.387637  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:10.387702  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:10.416324  832221 cri.go:89] found id: ""
	I1208 00:41:10.416338  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.416344  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:10.416349  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:10.416407  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:10.441107  832221 cri.go:89] found id: ""
	I1208 00:41:10.441121  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.441128  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:10.441133  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:10.441199  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:10.469633  832221 cri.go:89] found id: ""
	I1208 00:41:10.469646  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.469659  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:10.469664  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:10.469723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:10.494876  832221 cri.go:89] found id: ""
	I1208 00:41:10.494890  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.494896  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:10.494902  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:10.494960  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:10.531392  832221 cri.go:89] found id: ""
	I1208 00:41:10.531407  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.531414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:10.531419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:10.531488  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:10.564042  832221 cri.go:89] found id: ""
	I1208 00:41:10.564056  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.564063  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:10.564072  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:10.564082  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:10.630069  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:10.630089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:10.647244  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:10.647260  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:10.722704  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:10.722715  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:10.722727  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:10.795845  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:10.795865  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.326398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:13.336729  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:13.336789  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:13.362204  832221 cri.go:89] found id: ""
	I1208 00:41:13.362218  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.362225  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:13.362231  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:13.362288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:13.387741  832221 cri.go:89] found id: ""
	I1208 00:41:13.387755  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.387762  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:13.387767  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:13.387825  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:13.416495  832221 cri.go:89] found id: ""
	I1208 00:41:13.416508  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.416515  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:13.416520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:13.416580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:13.442986  832221 cri.go:89] found id: ""
	I1208 00:41:13.443000  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.443008  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:13.443015  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:13.443074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:13.468540  832221 cri.go:89] found id: ""
	I1208 00:41:13.468555  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.468562  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:13.468568  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:13.468626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:13.494472  832221 cri.go:89] found id: ""
	I1208 00:41:13.494487  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.494494  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:13.494500  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:13.494561  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:13.521305  832221 cri.go:89] found id: ""
	I1208 00:41:13.521318  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.521325  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:13.521333  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:13.521347  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.553343  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:13.553359  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:13.621324  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:13.621342  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:13.638433  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:13.638450  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:13.707199  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:13.707209  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:13.707232  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.276942  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:16.286989  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:16.287051  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:16.312004  832221 cri.go:89] found id: ""
	I1208 00:41:16.312018  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.312025  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:16.312031  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:16.312090  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:16.336677  832221 cri.go:89] found id: ""
	I1208 00:41:16.336691  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.336698  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:16.336703  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:16.336763  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:16.361556  832221 cri.go:89] found id: ""
	I1208 00:41:16.361579  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.361587  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:16.361592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:16.361661  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:16.386950  832221 cri.go:89] found id: ""
	I1208 00:41:16.386964  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.386971  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:16.386977  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:16.387045  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:16.413845  832221 cri.go:89] found id: ""
	I1208 00:41:16.413867  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.413877  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:16.413883  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:16.413949  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:16.439928  832221 cri.go:89] found id: ""
	I1208 00:41:16.439942  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.439959  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:16.439965  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:16.440030  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:16.466154  832221 cri.go:89] found id: ""
	I1208 00:41:16.466176  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.466183  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:16.466191  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:16.466201  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.533106  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:16.533124  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:16.563727  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:16.563742  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:16.633732  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:16.633751  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:16.650899  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:16.650917  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:16.719345  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
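Every "describe nodes" attempt in these cycles fails identically: kubectl cannot reach the apiserver on localhost:8441, so nothing beyond the connection-refused errors is collected. A quick manual check of the same symptom (a sketch only; assumes curl is present on the node, which the harness itself does not use):

	# expected while the apiserver is down: non-zero exit with "connection refused",
	# matching the memcache.go:265 errors above
	curl -sk "https://localhost:8441/api?timeout=32s" || echo "apiserver on :8441 not reachable"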
	I1208 00:41:19.221010  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:19.231342  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:19.231406  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:19.257316  832221 cri.go:89] found id: ""
	I1208 00:41:19.257330  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.257337  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:19.257343  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:19.257401  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:19.283560  832221 cri.go:89] found id: ""
	I1208 00:41:19.283574  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.283581  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:19.283586  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:19.283645  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:19.309316  832221 cri.go:89] found id: ""
	I1208 00:41:19.309332  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.309339  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:19.309344  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:19.309404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:19.336530  832221 cri.go:89] found id: ""
	I1208 00:41:19.336544  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.336551  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:19.336558  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:19.336617  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:19.362493  832221 cri.go:89] found id: ""
	I1208 00:41:19.362507  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.362515  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:19.362520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:19.362580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:19.388582  832221 cri.go:89] found id: ""
	I1208 00:41:19.388602  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.388609  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:19.388614  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:19.388671  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:19.414534  832221 cri.go:89] found id: ""
	I1208 00:41:19.414547  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.414554  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:19.414562  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:19.414573  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:19.478886  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.478896  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:19.478908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:19.547311  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:19.547330  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:19.577785  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:19.577801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:19.643881  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:19.643902  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.161081  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:22.171521  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:22.171585  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:22.198382  832221 cri.go:89] found id: ""
	I1208 00:41:22.198396  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.198413  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:22.198418  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:22.198474  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:22.224532  832221 cri.go:89] found id: ""
	I1208 00:41:22.224547  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.224554  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:22.224560  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:22.224618  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:22.250646  832221 cri.go:89] found id: ""
	I1208 00:41:22.250660  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.250667  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:22.250672  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:22.250738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:22.276120  832221 cri.go:89] found id: ""
	I1208 00:41:22.276134  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.276141  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:22.276146  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:22.276204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:22.307378  832221 cri.go:89] found id: ""
	I1208 00:41:22.307392  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.307399  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:22.307405  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:22.307481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:22.332887  832221 cri.go:89] found id: ""
	I1208 00:41:22.332902  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.332909  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:22.332915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:22.332973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:22.359765  832221 cri.go:89] found id: ""
	I1208 00:41:22.359790  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.359799  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:22.359806  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:22.359817  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:22.429639  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:22.429667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.446411  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:22.446429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:22.514425  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:22.514437  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:22.514449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:22.582646  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:22.582668  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
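Each retry cycle opens with a process-level check before falling back to the container runtime: pgrep looks for the newest process (-n) whose full command line (-f) exactly matches (-x) kube-apiserver.*minikube.*. The same probe can be run by hand on the node (flags copied from the log entries; the echo wrapper is illustrative):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	    && echo "kube-apiserver process found" \
	    || echo "no kube-apiserver process running"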
	I1208 00:41:25.113244  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:25.123522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:25.123581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:25.149789  832221 cri.go:89] found id: ""
	I1208 00:41:25.149803  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.149811  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:25.149816  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:25.149877  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:25.175748  832221 cri.go:89] found id: ""
	I1208 00:41:25.175780  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.175787  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:25.175793  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:25.175860  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:25.201633  832221 cri.go:89] found id: ""
	I1208 00:41:25.201647  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.201654  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:25.201660  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:25.201718  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:25.226256  832221 cri.go:89] found id: ""
	I1208 00:41:25.226270  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.226276  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:25.226282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:25.226340  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:25.251247  832221 cri.go:89] found id: ""
	I1208 00:41:25.251260  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.251267  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:25.251272  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:25.251332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:25.276489  832221 cri.go:89] found id: ""
	I1208 00:41:25.276502  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.276509  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:25.276514  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:25.276571  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:25.304102  832221 cri.go:89] found id: ""
	I1208 00:41:25.304116  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.304123  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:25.304131  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:25.304141  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.334560  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:25.334578  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:25.403772  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:25.403794  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:25.420560  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:25.420577  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:25.482668  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:25.482678  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:25.482689  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.050629  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:28.061960  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:28.062020  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:28.089309  832221 cri.go:89] found id: ""
	I1208 00:41:28.089322  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.089330  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:28.089335  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:28.089394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:28.114535  832221 cri.go:89] found id: ""
	I1208 00:41:28.114549  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.114556  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:28.114561  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:28.114620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:28.139191  832221 cri.go:89] found id: ""
	I1208 00:41:28.139205  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.139212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:28.139218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:28.139281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:28.169942  832221 cri.go:89] found id: ""
	I1208 00:41:28.169956  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.169963  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:28.169968  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:28.170026  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:28.194906  832221 cri.go:89] found id: ""
	I1208 00:41:28.194920  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.194927  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:28.194932  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:28.194991  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:28.220745  832221 cri.go:89] found id: ""
	I1208 00:41:28.220759  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.220766  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:28.220772  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:28.220831  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:28.246098  832221 cri.go:89] found id: ""
	I1208 00:41:28.246113  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.246128  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:28.246137  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:28.246147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:28.311151  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:28.311171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:28.328051  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:28.328067  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:28.392162  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:28.392172  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:28.392183  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.461355  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:28.461376  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:30.991861  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:31.002524  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:31.002603  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:31.053691  832221 cri.go:89] found id: ""
	I1208 00:41:31.053708  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.053715  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:31.053725  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:31.053785  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:31.089132  832221 cri.go:89] found id: ""
	I1208 00:41:31.089146  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.089163  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:31.089169  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:31.089252  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:31.121093  832221 cri.go:89] found id: ""
	I1208 00:41:31.121107  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.121114  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:31.121120  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:31.121193  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:31.148473  832221 cri.go:89] found id: ""
	I1208 00:41:31.148502  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.148510  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:31.148517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:31.148576  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:31.174204  832221 cri.go:89] found id: ""
	I1208 00:41:31.174218  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.174225  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:31.174231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:31.174291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:31.199996  832221 cri.go:89] found id: ""
	I1208 00:41:31.200009  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.200016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:31.200021  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:31.200079  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:31.224662  832221 cri.go:89] found id: ""
	I1208 00:41:31.224674  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.224681  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:31.224689  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:31.224699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:31.291397  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:31.291417  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:31.308061  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:31.308078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:31.372069  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:31.372079  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:31.372089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:31.443951  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:31.443972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:33.976603  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:33.987054  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:33.987113  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:34.031182  832221 cri.go:89] found id: ""
	I1208 00:41:34.031197  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.031205  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:34.031211  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:34.031285  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:34.060124  832221 cri.go:89] found id: ""
	I1208 00:41:34.060137  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.060145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:34.060150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:34.060207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:34.092539  832221 cri.go:89] found id: ""
	I1208 00:41:34.092553  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.092560  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:34.092565  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:34.092627  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:34.121995  832221 cri.go:89] found id: ""
	I1208 00:41:34.122009  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.122016  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:34.122022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:34.122077  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:34.150463  832221 cri.go:89] found id: ""
	I1208 00:41:34.150476  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.150483  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:34.150488  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:34.150549  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:34.177998  832221 cri.go:89] found id: ""
	I1208 00:41:34.178021  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.178029  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:34.178034  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:34.178102  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:34.202722  832221 cri.go:89] found id: ""
	I1208 00:41:34.202737  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.202744  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:34.202751  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:34.202761  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:34.267650  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:34.267670  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:34.284346  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:34.284364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:34.348837  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:34.348848  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:34.348858  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:34.417091  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:34.417112  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:36.948347  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:36.958825  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:36.958908  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:36.984186  832221 cri.go:89] found id: ""
	I1208 00:41:36.984200  832221 logs.go:282] 0 containers: []
	W1208 00:41:36.984207  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:36.984212  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:36.984269  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:37.020431  832221 cri.go:89] found id: ""
	I1208 00:41:37.020446  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.020454  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:37.020460  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:37.020530  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:37.067191  832221 cri.go:89] found id: ""
	I1208 00:41:37.067205  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.067212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:37.067218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:37.067294  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:37.094272  832221 cri.go:89] found id: ""
	I1208 00:41:37.094286  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.094293  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:37.094298  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:37.094355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:37.119686  832221 cri.go:89] found id: ""
	I1208 00:41:37.119709  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.119716  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:37.119722  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:37.119787  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:37.145200  832221 cri.go:89] found id: ""
	I1208 00:41:37.145214  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.145221  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:37.145227  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:37.145288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:37.171336  832221 cri.go:89] found id: ""
	I1208 00:41:37.171350  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.171357  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:37.171364  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:37.171375  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:37.237645  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:37.237664  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:37.254543  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:37.254560  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:37.322370  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:37.322380  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:37.322392  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:37.391923  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:37.391943  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:39.926099  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:39.936345  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:39.936412  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:39.962579  832221 cri.go:89] found id: ""
	I1208 00:41:39.962593  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.962600  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:39.962605  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:39.962669  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:39.989842  832221 cri.go:89] found id: ""
	I1208 00:41:39.989856  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.989863  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:39.989868  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:39.989926  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:40.044295  832221 cri.go:89] found id: ""
	I1208 00:41:40.044310  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.044325  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:40.044339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:40.044416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:40.079243  832221 cri.go:89] found id: ""
	I1208 00:41:40.079258  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.079266  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:40.079273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:40.079349  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:40.112934  832221 cri.go:89] found id: ""
	I1208 00:41:40.112948  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.112956  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:40.112961  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:40.113039  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:40.143499  832221 cri.go:89] found id: ""
	I1208 00:41:40.143513  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.143521  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:40.143526  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:40.143587  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:40.169504  832221 cri.go:89] found id: ""
	I1208 00:41:40.169519  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.169526  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:40.169533  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:40.169544  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:40.235615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:40.235638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:40.252840  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:40.252857  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:40.321804  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:40.321814  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:40.321827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:40.390368  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:40.390389  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:42.923500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:42.933619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:42.933678  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:42.959506  832221 cri.go:89] found id: ""
	I1208 00:41:42.959520  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.959527  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:42.959533  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:42.959596  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:42.984924  832221 cri.go:89] found id: ""
	I1208 00:41:42.984937  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.984946  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:42.984951  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:42.985013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:43.023875  832221 cri.go:89] found id: ""
	I1208 00:41:43.023889  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.023896  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:43.023903  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:43.023962  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:43.053076  832221 cri.go:89] found id: ""
	I1208 00:41:43.053090  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.053097  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:43.053102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:43.053185  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:43.084087  832221 cri.go:89] found id: ""
	I1208 00:41:43.084101  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.084108  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:43.084113  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:43.084174  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:43.109712  832221 cri.go:89] found id: ""
	I1208 00:41:43.109737  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.109746  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:43.109751  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:43.109817  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:43.134863  832221 cri.go:89] found id: ""
	I1208 00:41:43.134877  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.134886  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:43.134894  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:43.134908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:43.201957  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:43.201967  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:43.201982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:43.273086  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:43.273107  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:43.305154  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:43.305177  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:43.373686  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:43.373708  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:45.892403  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:45.902913  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:45.902990  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:45.927841  832221 cri.go:89] found id: ""
	I1208 00:41:45.927855  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.927862  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:45.927868  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:45.927927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:45.952154  832221 cri.go:89] found id: ""
	I1208 00:41:45.952167  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.952174  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:45.952179  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:45.952236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:45.979675  832221 cri.go:89] found id: ""
	I1208 00:41:45.979688  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.979696  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:45.979700  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:45.979755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:46.013259  832221 cri.go:89] found id: ""
	I1208 00:41:46.013273  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.013280  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:46.013285  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:46.013351  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:46.042352  832221 cri.go:89] found id: ""
	I1208 00:41:46.042366  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.042372  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:46.042377  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:46.042440  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:46.070733  832221 cri.go:89] found id: ""
	I1208 00:41:46.070746  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.070753  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:46.070763  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:46.070823  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:46.098473  832221 cri.go:89] found id: ""
	I1208 00:41:46.098487  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.098494  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:46.098502  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:46.098512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:46.125193  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:46.125209  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:46.193253  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:46.193274  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:46.210082  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:46.210099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:46.276709  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:46.276719  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:46.276730  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:48.845307  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:48.856005  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:48.856069  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:48.880627  832221 cri.go:89] found id: ""
	I1208 00:41:48.880643  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.880650  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:48.880655  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:48.880723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:48.910676  832221 cri.go:89] found id: ""
	I1208 00:41:48.910691  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.910699  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:48.910704  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:48.910765  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:48.937001  832221 cri.go:89] found id: ""
	I1208 00:41:48.937015  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.937022  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:48.937027  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:48.937087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:48.961464  832221 cri.go:89] found id: ""
	I1208 00:41:48.961478  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.961484  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:48.961489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:48.961546  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:48.985593  832221 cri.go:89] found id: ""
	I1208 00:41:48.985607  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.985614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:48.985618  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:48.985673  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:49.021903  832221 cri.go:89] found id: ""
	I1208 00:41:49.021917  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.021924  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:49.021929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:49.021987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:49.051822  832221 cri.go:89] found id: ""
	I1208 00:41:49.051835  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.051842  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:49.051850  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:49.051860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:49.119331  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:49.119350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:49.136412  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:49.136429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:49.209120  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:49.209130  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:49.209142  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:49.281668  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:49.281696  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:51.816189  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:51.826432  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:51.826508  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:51.852549  832221 cri.go:89] found id: ""
	I1208 00:41:51.852563  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.852570  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:51.852575  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:51.852639  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:51.882102  832221 cri.go:89] found id: ""
	I1208 00:41:51.882115  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.882123  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:51.882128  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:51.882183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:51.908918  832221 cri.go:89] found id: ""
	I1208 00:41:51.908931  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.908938  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:51.908943  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:51.908999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:51.933704  832221 cri.go:89] found id: ""
	I1208 00:41:51.933718  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.933725  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:51.933731  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:51.933786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:51.959460  832221 cri.go:89] found id: ""
	I1208 00:41:51.959474  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.959480  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:51.959485  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:51.959543  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:51.985138  832221 cri.go:89] found id: ""
	I1208 00:41:51.985151  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.985158  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:51.985170  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:51.985229  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:52.017078  832221 cri.go:89] found id: ""
	I1208 00:41:52.017092  832221 logs.go:282] 0 containers: []
	W1208 00:41:52.017100  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:52.017108  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:52.017118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:52.061579  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:52.061595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:52.130427  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:52.130446  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:52.146893  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:52.146909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:52.216088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:52.216098  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:52.216109  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:54.782500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:54.793061  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:54.793123  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:54.818661  832221 cri.go:89] found id: ""
	I1208 00:41:54.818675  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.818682  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:54.818688  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:54.818747  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:54.843336  832221 cri.go:89] found id: ""
	I1208 00:41:54.843351  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.843358  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:54.843363  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:54.843423  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:54.873031  832221 cri.go:89] found id: ""
	I1208 00:41:54.873045  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.873052  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:54.873057  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:54.873114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:54.904194  832221 cri.go:89] found id: ""
	I1208 00:41:54.904208  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.904215  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:54.904221  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:54.904281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:54.928355  832221 cri.go:89] found id: ""
	I1208 00:41:54.928370  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.928377  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:54.928382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:54.928441  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:54.954187  832221 cri.go:89] found id: ""
	I1208 00:41:54.954201  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.954208  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:54.954214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:54.954277  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:54.979288  832221 cri.go:89] found id: ""
	I1208 00:41:54.979301  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.979308  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:54.979316  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:54.979329  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:55.047402  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:55.047422  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:55.065193  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:55.065210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:55.134035  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:55.134045  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:55.134056  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:55.202635  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:55.202656  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
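	(The cycle above keeps repeating because no kube-apiserver container ever comes up, so every kubectl call against https://localhost:8441 is refused. A minimal manual probe along the same lines, assuming the same port 8441 and that crictl is available on the node, might look like the hypothetical sketch below; it is not part of the recorded test run.)

		# hypothetical manual probe, not part of the captured log
		sudo crictl ps -a --name=kube-apiserver            # is the apiserver container present at all?
		curl -k --max-time 5 https://localhost:8441/livez \
		  || echo "apiserver not reachable on 8441"        # same endpoint the test keeps failing against
		sudo journalctl -u kubelet -n 50 --no-pager        # inspect why the static pod never started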
	I1208 00:41:57.732860  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:57.743009  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:57.743070  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:57.769255  832221 cri.go:89] found id: ""
	I1208 00:41:57.769270  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.769277  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:57.769282  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:57.769341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:57.796071  832221 cri.go:89] found id: ""
	I1208 00:41:57.796084  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.796092  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:57.796097  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:57.796152  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:57.821305  832221 cri.go:89] found id: ""
	I1208 00:41:57.821319  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.821326  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:57.821331  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:57.821389  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:57.850632  832221 cri.go:89] found id: ""
	I1208 00:41:57.850646  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.850653  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:57.850658  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:57.850715  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:57.874739  832221 cri.go:89] found id: ""
	I1208 00:41:57.874753  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.874760  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:57.874766  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:57.874829  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:57.898660  832221 cri.go:89] found id: ""
	I1208 00:41:57.898674  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.898681  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:57.898687  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:57.898744  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:57.924451  832221 cri.go:89] found id: ""
	I1208 00:41:57.924465  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.924472  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:57.924480  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:57.924490  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:57.990717  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:57.990739  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:58.009617  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:58.009637  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:58.089328  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:58.089339  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:58.089350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:58.158129  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:58.158149  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:00.692822  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:00.703351  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:00.703413  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:00.730817  832221 cri.go:89] found id: ""
	I1208 00:42:00.730831  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.730838  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:00.730864  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:00.730925  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:00.757577  832221 cri.go:89] found id: ""
	I1208 00:42:00.757591  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.757599  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:00.757604  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:00.757668  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:00.784124  832221 cri.go:89] found id: ""
	I1208 00:42:00.784140  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.784147  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:00.784153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:00.784213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:00.811121  832221 cri.go:89] found id: ""
	I1208 00:42:00.811136  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.811143  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:00.811149  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:00.811207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:00.838124  832221 cri.go:89] found id: ""
	I1208 00:42:00.838139  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.838147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:00.838153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:00.838216  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:00.864699  832221 cri.go:89] found id: ""
	I1208 00:42:00.864713  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.864720  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:00.864726  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:00.864786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:00.890750  832221 cri.go:89] found id: ""
	I1208 00:42:00.890772  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.890780  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:00.890788  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:00.890799  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:00.956810  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:00.956830  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:00.973943  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:00.973959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:01.050555  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:01.050566  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:01.050579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:01.129234  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:01.129257  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:03.659413  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:03.669877  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:03.669937  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:03.696297  832221 cri.go:89] found id: ""
	I1208 00:42:03.696316  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.696324  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:03.696329  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:03.696388  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:03.722691  832221 cri.go:89] found id: ""
	I1208 00:42:03.722706  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.722713  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:03.722718  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:03.722777  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:03.749319  832221 cri.go:89] found id: ""
	I1208 00:42:03.749336  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.749343  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:03.749348  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:03.749409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:03.778235  832221 cri.go:89] found id: ""
	I1208 00:42:03.778250  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.778257  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:03.778262  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:03.778323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:03.805566  832221 cri.go:89] found id: ""
	I1208 00:42:03.805579  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.805586  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:03.805592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:03.805656  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:03.835418  832221 cri.go:89] found id: ""
	I1208 00:42:03.835434  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.835441  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:03.835447  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:03.835507  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:03.862034  832221 cri.go:89] found id: ""
	I1208 00:42:03.862048  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.862056  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:03.862063  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:03.862074  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:03.926004  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:03.926014  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:03.926025  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:03.994473  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:03.994491  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:04.028498  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:04.028530  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:04.103887  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:04.103913  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:06.621744  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:06.631952  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:06.632014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:06.656834  832221 cri.go:89] found id: ""
	I1208 00:42:06.656847  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.656855  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:06.656859  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:06.656915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:06.681945  832221 cri.go:89] found id: ""
	I1208 00:42:06.681960  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.681967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:06.681972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:06.682029  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:06.710714  832221 cri.go:89] found id: ""
	I1208 00:42:06.710728  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.710735  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:06.710741  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:06.710798  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:06.737689  832221 cri.go:89] found id: ""
	I1208 00:42:06.737703  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.737710  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:06.737716  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:06.737773  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:06.763380  832221 cri.go:89] found id: ""
	I1208 00:42:06.763394  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.763401  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:06.763406  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:06.763468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:06.788657  832221 cri.go:89] found id: ""
	I1208 00:42:06.788672  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.788679  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:06.788684  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:06.788743  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:06.814619  832221 cri.go:89] found id: ""
	I1208 00:42:06.814633  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.814641  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:06.814648  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:06.814659  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:06.876947  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:06.876957  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:06.876967  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:06.945083  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:06.945103  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:06.975476  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:06.975492  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:07.049079  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:07.049111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.568507  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:09.578816  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:09.578896  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:09.604243  832221 cri.go:89] found id: ""
	I1208 00:42:09.604264  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.604271  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:09.604276  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:09.604335  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:09.629065  832221 cri.go:89] found id: ""
	I1208 00:42:09.629079  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.629086  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:09.629091  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:09.629187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:09.657275  832221 cri.go:89] found id: ""
	I1208 00:42:09.657288  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.657295  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:09.657300  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:09.657356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:09.683416  832221 cri.go:89] found id: ""
	I1208 00:42:09.683431  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.683438  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:09.683443  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:09.683500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:09.709238  832221 cri.go:89] found id: ""
	I1208 00:42:09.709261  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.709269  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:09.709274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:09.709339  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:09.734114  832221 cri.go:89] found id: ""
	I1208 00:42:09.734128  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.734134  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:09.734152  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:09.734209  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:09.759311  832221 cri.go:89] found id: ""
	I1208 00:42:09.759325  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.759331  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:09.759339  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:09.759349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:09.824496  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:09.824516  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.841803  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:09.841820  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:09.904180  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:09.904190  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:09.904207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:09.971074  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:09.971095  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:12.508051  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:12.518216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:12.518274  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:12.544077  832221 cri.go:89] found id: ""
	I1208 00:42:12.544098  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.544105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:12.544121  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:12.544183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:12.573722  832221 cri.go:89] found id: ""
	I1208 00:42:12.573737  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.573744  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:12.573749  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:12.573814  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:12.605486  832221 cri.go:89] found id: ""
	I1208 00:42:12.605500  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.605508  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:12.605513  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:12.605573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:12.630248  832221 cri.go:89] found id: ""
	I1208 00:42:12.630262  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.630269  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:12.630274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:12.630334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:12.657639  832221 cri.go:89] found id: ""
	I1208 00:42:12.657653  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.657660  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:12.657665  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:12.657729  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:12.687466  832221 cri.go:89] found id: ""
	I1208 00:42:12.687488  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.687495  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:12.687501  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:12.687560  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:12.712697  832221 cri.go:89] found id: ""
	I1208 00:42:12.712713  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.712720  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:12.712729  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:12.712740  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:12.782236  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:12.782256  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:12.798869  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:12.798890  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:12.869748  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:12.869759  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:12.869772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:12.940819  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:12.940839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:15.471472  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:15.481993  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:15.482061  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:15.508029  832221 cri.go:89] found id: ""
	I1208 00:42:15.508043  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.508050  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:15.508055  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:15.508114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:15.533198  832221 cri.go:89] found id: ""
	I1208 00:42:15.533212  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.533219  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:15.533224  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:15.533293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:15.559200  832221 cri.go:89] found id: ""
	I1208 00:42:15.559215  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.559222  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:15.559230  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:15.559292  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:15.586368  832221 cri.go:89] found id: ""
	I1208 00:42:15.586382  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.586389  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:15.586394  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:15.586463  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:15.613829  832221 cri.go:89] found id: ""
	I1208 00:42:15.613862  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.613870  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:15.613875  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:15.613939  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:15.638601  832221 cri.go:89] found id: ""
	I1208 00:42:15.638616  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.638623  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:15.638629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:15.638687  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:15.663577  832221 cri.go:89] found id: ""
	I1208 00:42:15.663592  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.663599  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:15.663606  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:15.663617  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:15.729315  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:15.729346  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:15.746062  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:15.746081  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:15.817222  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:15.817234  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:15.817246  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:15.884896  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:15.884916  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.414159  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:18.424398  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:18.424464  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:18.454155  832221 cri.go:89] found id: ""
	I1208 00:42:18.454169  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.454177  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:18.454183  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:18.454245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:18.479882  832221 cri.go:89] found id: ""
	I1208 00:42:18.479896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.479904  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:18.479909  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:18.479969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:18.505299  832221 cri.go:89] found id: ""
	I1208 00:42:18.505313  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.505320  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:18.505325  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:18.505383  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:18.532868  832221 cri.go:89] found id: ""
	I1208 00:42:18.532881  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.532889  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:18.532894  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:18.532954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:18.561651  832221 cri.go:89] found id: ""
	I1208 00:42:18.561664  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.561671  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:18.561677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:18.561735  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:18.589482  832221 cri.go:89] found id: ""
	I1208 00:42:18.589496  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.589503  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:18.589509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:18.589566  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:18.613882  832221 cri.go:89] found id: ""
	I1208 00:42:18.613896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.613904  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:18.613911  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:18.613922  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.641758  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:18.641774  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:18.717185  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:18.717210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:18.734137  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:18.734155  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:18.802653  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:18.802664  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:18.802676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.371665  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:21.383636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:21.383698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:21.408072  832221 cri.go:89] found id: ""
	I1208 00:42:21.408086  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.408093  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:21.408098  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:21.408155  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:21.432924  832221 cri.go:89] found id: ""
	I1208 00:42:21.432948  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.432955  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:21.432961  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:21.433025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:21.457883  832221 cri.go:89] found id: ""
	I1208 00:42:21.457897  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.457904  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:21.457909  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:21.457967  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:21.483388  832221 cri.go:89] found id: ""
	I1208 00:42:21.483402  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.483410  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:21.483415  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:21.483475  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:21.509434  832221 cri.go:89] found id: ""
	I1208 00:42:21.509448  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.509456  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:21.509461  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:21.509519  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:21.534437  832221 cri.go:89] found id: ""
	I1208 00:42:21.534451  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.534458  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:21.534464  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:21.534521  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:21.559919  832221 cri.go:89] found id: ""
	I1208 00:42:21.559932  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.559939  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:21.559949  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:21.559959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:21.625640  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:21.625661  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:21.645629  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:21.645648  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:21.714153  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:21.714163  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:21.714173  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.781175  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:21.781196  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:24.310973  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:24.321986  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:24.322048  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:24.348885  832221 cri.go:89] found id: ""
	I1208 00:42:24.348899  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.348906  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:24.348912  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:24.348972  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:24.378380  832221 cri.go:89] found id: ""
	I1208 00:42:24.378394  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.378401  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:24.378407  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:24.378468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:24.403905  832221 cri.go:89] found id: ""
	I1208 00:42:24.403922  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.403933  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:24.403938  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:24.404014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:24.433947  832221 cri.go:89] found id: ""
	I1208 00:42:24.433961  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.433969  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:24.433975  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:24.434037  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:24.459342  832221 cri.go:89] found id: ""
	I1208 00:42:24.459356  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.459363  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:24.459368  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:24.459429  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:24.484750  832221 cri.go:89] found id: ""
	I1208 00:42:24.484764  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.484771  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:24.484777  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:24.484832  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:24.514464  832221 cri.go:89] found id: ""
	I1208 00:42:24.514478  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.514493  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:24.514501  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:24.514512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:24.580016  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:24.580037  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:24.598055  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:24.598071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:24.664079  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:24.664089  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:24.664099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:24.733616  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:24.733639  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:27.263764  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:27.274828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:27.274913  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:27.305226  832221 cri.go:89] found id: ""
	I1208 00:42:27.305241  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.305248  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:27.305253  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:27.305312  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:27.330800  832221 cri.go:89] found id: ""
	I1208 00:42:27.330815  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.330822  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:27.330827  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:27.330914  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:27.357232  832221 cri.go:89] found id: ""
	I1208 00:42:27.357246  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.357253  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:27.357258  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:27.357314  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:27.385173  832221 cri.go:89] found id: ""
	I1208 00:42:27.385186  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.385193  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:27.385199  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:27.385264  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:27.415410  832221 cri.go:89] found id: ""
	I1208 00:42:27.415423  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.415430  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:27.415435  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:27.415491  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:27.441114  832221 cri.go:89] found id: ""
	I1208 00:42:27.441128  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.441135  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:27.441140  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:27.441204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:27.468819  832221 cri.go:89] found id: ""
	I1208 00:42:27.468833  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.468841  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:27.468849  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:27.468859  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:27.534615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:27.534638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:27.552028  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:27.552044  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:27.617298  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:27.617308  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:27.617318  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:27.685006  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:27.685026  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.213024  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:30.223536  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:30.223597  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:30.252285  832221 cri.go:89] found id: ""
	I1208 00:42:30.252299  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.252306  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:30.252311  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:30.252378  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:30.283908  832221 cri.go:89] found id: ""
	I1208 00:42:30.283922  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.283931  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:30.283936  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:30.283994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:30.318884  832221 cri.go:89] found id: ""
	I1208 00:42:30.318899  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.318906  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:30.318912  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:30.318968  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:30.349060  832221 cri.go:89] found id: ""
	I1208 00:42:30.349075  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.349082  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:30.349088  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:30.349164  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:30.376813  832221 cri.go:89] found id: ""
	I1208 00:42:30.376829  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.376837  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:30.376842  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:30.376901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:30.404729  832221 cri.go:89] found id: ""
	I1208 00:42:30.404744  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.404750  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:30.404756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:30.404819  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:30.431212  832221 cri.go:89] found id: ""
	I1208 00:42:30.431226  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.431233  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:30.431241  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:30.431251  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:30.498900  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:30.498911  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:30.498921  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:30.567676  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:30.567699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.596733  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:30.596749  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:30.662190  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:30.662211  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:33.179806  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:33.190715  832221 kubeadm.go:602] duration metric: took 4m2.701897978s to restartPrimaryControlPlane
	W1208 00:42:33.190784  832221 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:42:33.190886  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:42:33.600155  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:42:33.612954  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:42:33.620726  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:42:33.620779  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:42:33.628462  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:42:33.628471  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:42:33.628522  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:42:33.636365  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:42:33.636420  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:42:33.643722  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:42:33.651305  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:42:33.651360  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:42:33.658707  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.666176  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:42:33.666232  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.673523  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:42:33.681031  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:42:33.681086  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:42:33.688609  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:42:33.724887  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:42:33.724941  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:42:33.797997  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:42:33.798062  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:42:33.798096  832221 kubeadm.go:319] OS: Linux
	I1208 00:42:33.798139  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:42:33.798186  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:42:33.798232  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:42:33.798279  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:42:33.798325  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:42:33.798372  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:42:33.798416  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:42:33.798462  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:42:33.798507  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:42:33.859952  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:42:33.860071  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:42:33.860170  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:42:33.868067  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:42:33.869917  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:42:33.869999  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:42:33.870063  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:42:33.870137  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:42:33.870197  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:42:33.870265  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:42:33.870368  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:42:33.870448  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:42:33.870928  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:42:33.871217  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:42:33.871538  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:42:33.871740  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:42:33.871797  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:42:34.028121  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:42:34.367427  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:42:34.702083  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:42:35.025762  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:42:35.511131  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:42:35.511826  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:42:35.514836  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:42:35.516409  832221 out.go:252]   - Booting up control plane ...
	I1208 00:42:35.516507  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:42:35.516848  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:42:35.519384  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:42:35.533955  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:42:35.534084  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:42:35.541753  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:42:35.542016  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:42:35.542213  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:42:35.674531  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:42:35.674638  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:46:35.675373  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115059s
	I1208 00:46:35.675397  832221 kubeadm.go:319] 
	I1208 00:46:35.675450  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:46:35.675480  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:46:35.675578  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:46:35.675582  832221 kubeadm.go:319] 
	I1208 00:46:35.675680  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:46:35.675709  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:46:35.675738  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:46:35.675741  832221 kubeadm.go:319] 
	I1208 00:46:35.680376  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:46:35.680807  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:46:35.680915  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:46:35.681162  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:46:35.681167  832221 kubeadm.go:319] 
	I1208 00:46:35.681238  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:46:35.681347  832221 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115059s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1208 00:46:35.681436  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:46:36.099633  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:46:36.112518  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:46:36.112573  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:46:36.120714  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:46:36.120723  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:46:36.120772  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:46:36.128165  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:46:36.128218  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:46:36.135603  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:46:36.142958  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:46:36.143011  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:46:36.150557  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.158107  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:46:36.158166  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.165315  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:46:36.172678  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:46:36.172733  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:46:36.179983  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:46:36.221281  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:46:36.221576  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:46:36.304904  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:46:36.304971  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:46:36.305006  832221 kubeadm.go:319] OS: Linux
	I1208 00:46:36.305062  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:46:36.305109  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:46:36.305154  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:46:36.305201  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:46:36.305247  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:46:36.305299  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:46:36.305343  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:46:36.305391  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:46:36.305437  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:46:36.375885  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:46:36.375986  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:46:36.376075  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:46:36.387291  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:46:36.389104  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:46:36.389182  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:46:36.389272  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:46:36.389371  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:46:36.389436  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:46:36.389506  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:46:36.389559  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:46:36.389626  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:46:36.389691  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:46:36.389770  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:46:36.389858  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:46:36.389893  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:46:36.389946  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:46:37.029886  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:46:37.175943  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:46:37.229666  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:46:37.386162  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:46:37.721262  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:46:37.722365  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:46:37.726361  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:46:37.727820  832221 out.go:252]   - Booting up control plane ...
	I1208 00:46:37.727919  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:46:37.727991  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:46:37.728873  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:46:37.743822  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:46:37.744021  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:46:37.751812  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:46:37.751899  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:46:37.751935  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:46:37.878966  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:46:37.879079  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:50:37.879778  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187421s
	I1208 00:50:37.879803  832221 kubeadm.go:319] 
	I1208 00:50:37.879860  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:50:37.879893  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:50:37.879997  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:50:37.880002  832221 kubeadm.go:319] 
	I1208 00:50:37.880106  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:50:37.880137  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:50:37.880167  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:50:37.880170  832221 kubeadm.go:319] 
	I1208 00:50:37.885162  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:50:37.885617  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:50:37.885748  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:50:37.886002  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:50:37.886010  832221 kubeadm.go:319] 
	I1208 00:50:37.886091  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:50:37.886152  832221 kubeadm.go:403] duration metric: took 12m7.43140026s to StartCluster
	I1208 00:50:37.886198  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:50:37.886263  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:50:37.913929  832221 cri.go:89] found id: ""
	I1208 00:50:37.913943  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.913950  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:50:37.913956  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:50:37.914018  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:50:37.940084  832221 cri.go:89] found id: ""
	I1208 00:50:37.940099  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.940106  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:50:37.940111  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:50:37.940168  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:50:37.965369  832221 cri.go:89] found id: ""
	I1208 00:50:37.965385  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.965392  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:50:37.965397  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:50:37.965454  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:50:37.991902  832221 cri.go:89] found id: ""
	I1208 00:50:37.991916  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.991923  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:50:37.991929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:50:37.991989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:50:38.041593  832221 cri.go:89] found id: ""
	I1208 00:50:38.041607  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.041614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:50:38.041619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:50:38.041681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:50:38.082440  832221 cri.go:89] found id: ""
	I1208 00:50:38.082454  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.082461  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:50:38.082467  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:50:38.082527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:50:38.108776  832221 cri.go:89] found id: ""
	I1208 00:50:38.108794  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.108804  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:50:38.108813  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:50:38.108827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:50:38.179358  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:50:38.179368  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:50:38.179379  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:50:38.249264  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:50:38.249284  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:50:38.283297  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:50:38.283313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:50:38.352336  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:50:38.352356  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 00:50:38.370094  832221 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:50:38.370135  832221 out.go:285] * 
	W1208 00:50:38.370244  832221 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.370347  832221 out.go:285] * 
	W1208 00:50:38.372671  832221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:50:38.375987  832221 out.go:203] 
	W1208 00:50:38.377331  832221 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.377432  832221 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:50:38.377486  832221 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:50:38.378650  832221 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976141949Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976389032Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976505948Z" level=info msg="Create NRI interface"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976728531Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976803559Z" level=info msg="runtime interface created"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976871433Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976925095Z" level=info msg="runtime interface starting up..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976975737Z" level=info msg="starting plugins..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.977043373Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.97717112Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:38:28 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.863535575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=86c63571-1518-417d-8c36-88972a10f046 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864340284Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd30f3d8-2e57-4e42-9d38-12f0c72774a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864886538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2294e0c2-3c35-4ad2-b70e-1cf27e140e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865379712Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8bd0e2b4-0a84-462b-a4c0-b4ef6c82ea6b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865907537Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6aa3aa31-43f2-49f4-affe-a3c22725ca07 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.86644149Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ab7db80c-c2d4-4d6c-acf1-db4a7ce32608 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.867005106Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fe935a58-ea6c-4485-86ff-51db887cec2b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:39.583288   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:39.583841   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:39.585518   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:39.585825   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:39.587218   21224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:39 up  5:32,  0 user,  load average: 0.46, 0.26, 0.45
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:50:37 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 08 00:50:38 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:38 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:38 functional-525396 kubelet[21067]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:38 functional-525396 kubelet[21067]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:38 functional-525396 kubelet[21067]: E1208 00:50:38.075592   21067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 08 00:50:38 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:38 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:38 functional-525396 kubelet[21132]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:38 functional-525396 kubelet[21132]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:38 functional-525396 kubelet[21132]: E1208 00:50:38.815306   21132 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 08 00:50:39 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:39 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: E1208 00:50:39.565119   21217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (337.979863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.62s)
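The kubelet journal above shows why this start never converged: kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm preflight warning says cgroup v1 support now has to be opted into by setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of how one might confirm the host's cgroup mode and stage that setting is shown below; the stat check is standard, but the lower-camel YAML key (failCgroupV1), the patch file name, and the delivery path are assumptions, not taken from this report.

	# Print the filesystem type backing /sys/fs/cgroup: "tmpfs" means cgroup v1, "cgroup2fs" means cgroup v2.
	stat -fc %T /sys/fs/cgroup/

	# Hypothetical patch body for the "kubeletconfiguration" target (the log above shows kubeadm already
	# applying a strategic-merge patch to that target); the key casing "failCgroupV1" is assumed from
	# KubeletConfiguration naming conventions, the Go option name in the warning is FailCgroupV1.
	cat > kubeletconfiguration.yaml <<'EOF'
	failCgroupV1: false
	EOF

Whether minikube can carry such a setting through --extra-config or only through its kubeadm patches mechanism is not established by this log; the suggestion printed above (--extra-config=kubelet.cgroup-driver=systemd) concerns the cgroup driver rather than the v1/v2 split, so it may not clear this particular validation error on its own.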

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-525396 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-525396 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (60.546714ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-525396 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (306.098073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-714395 image ls --format yaml --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format json --alsologtostderr                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls --format table --alsologtostderr                                                                                       │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ ssh     │ functional-714395 ssh pgrep buildkitd                                                                                                             │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ image   │ functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr                                            │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ image   │ functional-714395 image ls                                                                                                                        │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ delete  │ -p functional-714395                                                                                                                              │ functional-714395 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │ 08 Dec 25 00:23 UTC │
	│ start   │ -p functional-525396 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:23 UTC │                     │
	│ start   │ -p functional-525396 --alsologtostderr -v=8                                                                                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:32 UTC │                     │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add registry.k8s.io/pause:latest                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache add minikube-local-cache-test:functional-525396                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ functional-525396 cache delete minikube-local-cache-test:functional-525396                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl images                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ cache   │ functional-525396 cache reload                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ kubectl │ functional-525396 kubectl -- --context functional-525396 get pods                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ start   │ -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:38:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:38:25.865142  832221 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:38:25.865266  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865270  832221 out.go:374] Setting ErrFile to fd 2...
	I1208 00:38:25.865273  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865522  832221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:38:25.865905  832221 out.go:368] Setting JSON to false
	I1208 00:38:25.866798  832221 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":19238,"bootTime":1765135068,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:38:25.866898  832221 start.go:143] virtualization:  
	I1208 00:38:25.870446  832221 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:38:25.873443  832221 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:38:25.873527  832221 notify.go:221] Checking for updates...
	I1208 00:38:25.877177  832221 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:38:25.880254  832221 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:38:25.883080  832221 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:38:25.885867  832221 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:38:25.888710  832221 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:38:25.892134  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:25.892227  832221 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:38:25.926814  832221 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:38:25.926949  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:25.982933  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:25.973301038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:25.983053  832221 docker.go:319] overlay module found
	I1208 00:38:25.986144  832221 out.go:179] * Using the docker driver based on existing profile
	I1208 00:38:25.988897  832221 start.go:309] selected driver: docker
	I1208 00:38:25.988906  832221 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:25.989004  832221 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:38:25.989104  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:26.085905  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:26.075169003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:26.086340  832221 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:38:26.086364  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:26.086419  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:26.086463  832221 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:26.089599  832221 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:38:26.092632  832221 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:38:26.095593  832221 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:38:26.098465  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:26.098511  832221 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:38:26.098512  832221 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:38:26.098520  832221 cache.go:65] Caching tarball of preloaded images
	I1208 00:38:26.098640  832221 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:38:26.098648  832221 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:38:26.098767  832221 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:38:26.118762  832221 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:38:26.118779  832221 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:38:26.118798  832221 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:38:26.118832  832221 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:38:26.118982  832221 start.go:364] duration metric: took 72.616µs to acquireMachinesLock for "functional-525396"
	I1208 00:38:26.119001  832221 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:38:26.119005  832221 fix.go:54] fixHost starting: 
	I1208 00:38:26.119276  832221 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:38:26.135702  832221 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:38:26.135737  832221 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:38:26.138942  832221 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:38:26.138968  832221 machine.go:94] provisionDockerMachine start ...
	I1208 00:38:26.139048  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.156040  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.156360  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.156366  832221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:38:26.306195  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.306209  832221 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:38:26.306278  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.323547  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.323853  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.323861  832221 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:38:26.483358  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.483423  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.500892  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.501201  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.501214  832221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:38:26.651219  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:38:26.651236  832221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:38:26.651262  832221 ubuntu.go:190] setting up certificates
	I1208 00:38:26.651269  832221 provision.go:84] configureAuth start
	I1208 00:38:26.651330  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:26.668935  832221 provision.go:143] copyHostCerts
	I1208 00:38:26.669007  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:38:26.669020  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:38:26.669092  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:38:26.669226  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:38:26.669232  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:38:26.669258  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:38:26.669316  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:38:26.669319  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:38:26.669351  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:38:26.669396  832221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:38:26.882878  832221 provision.go:177] copyRemoteCerts
	I1208 00:38:26.882932  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:38:26.882976  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.900195  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.008298  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:38:27.026654  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:38:27.044245  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:38:27.061828  832221 provision.go:87] duration metric: took 410.535167ms to configureAuth
	I1208 00:38:27.061847  832221 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:38:27.062049  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:27.062144  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.079069  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:27.079387  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:27.079399  832221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:38:27.403353  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:38:27.403368  832221 machine.go:97] duration metric: took 1.264393629s to provisionDockerMachine
	I1208 00:38:27.403378  832221 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:38:27.403389  832221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:38:27.403457  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:38:27.403520  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.422294  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.531362  832221 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:38:27.534870  832221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:38:27.534888  832221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:38:27.534898  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:38:27.534950  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:38:27.535028  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:38:27.535101  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:38:27.535142  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:38:27.543303  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:27.561264  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:38:27.579215  832221 start.go:296] duration metric: took 175.824145ms for postStartSetup
	I1208 00:38:27.579284  832221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:38:27.579329  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.597098  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.699502  832221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:38:27.703953  832221 fix.go:56] duration metric: took 1.584940995s for fixHost
	I1208 00:38:27.703967  832221 start.go:83] releasing machines lock for "functional-525396", held for 1.584978296s
	I1208 00:38:27.704034  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:27.720794  832221 ssh_runner.go:195] Run: cat /version.json
	I1208 00:38:27.720838  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.721083  832221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:38:27.721126  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.740766  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.744839  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.842382  832221 ssh_runner.go:195] Run: systemctl --version
	I1208 00:38:27.933498  832221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:38:27.969664  832221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:38:27.973926  832221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:38:27.973991  832221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:38:27.981670  832221 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:38:27.981684  832221 start.go:496] detecting cgroup driver to use...
	I1208 00:38:27.981714  832221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:38:27.981757  832221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:38:27.996930  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:38:28.011523  832221 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:38:28.011601  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:38:28.029696  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:38:28.043991  832221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:38:28.162184  832221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:38:28.302345  832221 docker.go:234] disabling docker service ...
	I1208 00:38:28.302409  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:38:28.316944  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:38:28.329323  832221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:38:28.471674  832221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:38:28.594617  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:38:28.607360  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:38:28.621958  832221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:38:28.622014  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.631486  832221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:38:28.631544  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.641093  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.650549  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.660155  832221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:38:28.667958  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.676952  832221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.685235  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.693630  832221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:38:28.701133  832221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:38:28.708624  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:28.814162  832221 ssh_runner.go:195] Run: sudo systemctl restart crio
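The sed invocations against /etc/crio/crio.conf.d/02-crio.conf above pin the pause image, switch CRI-O to the cgroupfs cgroup driver, place conmon in the pod cgroup, and open unprivileged low ports. A minimal sketch of the resulting drop-in, assuming no other keys are present (the file on the node may carry additional settings):

	# hypothetical reconstruction of /etc/crio/crio.conf.d/02-crio.conf
	# after the sed edits logged above; the actual file may differ
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart logged just above are what put these settings into effect before kubeadm is re-run.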
	I1208 00:38:28.986282  832221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:38:28.986346  832221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:38:28.991517  832221 start.go:564] Will wait 60s for crictl version
	I1208 00:38:28.991573  832221 ssh_runner.go:195] Run: which crictl
	I1208 00:38:28.995534  832221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:38:29.025912  832221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:38:29.025997  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.062279  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.096298  832221 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:38:29.099065  832221 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:38:29.116028  832221 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:38:29.122672  832221 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:38:29.125488  832221 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:38:29.125636  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:29.125706  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.164815  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.164827  832221 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:38:29.164879  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.195499  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.195511  832221 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:38:29.195518  832221 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:38:29.195647  832221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:38:29.195726  832221 ssh_runner.go:195] Run: crio config
	I1208 00:38:29.250138  832221 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:38:29.250159  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:29.250168  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:29.250181  832221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:38:29.250206  832221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:38:29.250329  832221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:38:29.250397  832221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:38:29.258150  832221 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:38:29.258234  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:38:29.265694  832221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:38:29.278151  832221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:38:29.290865  832221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1208 00:38:29.303277  832221 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:38:29.306745  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:29.413867  832221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:38:29.757020  832221 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:38:29.757040  832221 certs.go:195] generating shared ca certs ...
	I1208 00:38:29.757055  832221 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:38:29.757227  832221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:38:29.757282  832221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:38:29.757288  832221 certs.go:257] generating profile certs ...
	I1208 00:38:29.757406  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:38:29.757463  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:38:29.757516  832221 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:38:29.757642  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:38:29.757680  832221 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:38:29.757687  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:38:29.757715  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:38:29.757753  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:38:29.757774  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:38:29.757826  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:29.761393  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:38:29.783882  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:38:29.803461  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:38:29.822714  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:38:29.839981  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:38:29.857351  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:38:29.874240  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:38:29.890650  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:38:29.906746  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:38:29.924059  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:38:29.940748  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:38:29.958110  832221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:38:29.970093  832221 ssh_runner.go:195] Run: openssl version
	I1208 00:38:29.976075  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.983124  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:38:29.990594  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994143  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994197  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:38:30.038336  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:38:30.048261  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.057929  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:38:30.067406  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072044  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072104  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.114205  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:38:30.122367  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.130206  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:38:30.138222  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142205  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142264  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.188681  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:38:30.197066  832221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:38:30.201256  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:38:30.247635  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:38:30.290467  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:38:30.332415  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:38:30.373141  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:38:30.413979  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:38:30.454763  832221 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:30.454864  832221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:38:30.454938  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.481225  832221 cri.go:89] found id: ""
	I1208 00:38:30.481285  832221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:38:30.488799  832221 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:38:30.488808  832221 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:38:30.488859  832221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:38:30.495821  832221 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.496331  832221 kubeconfig.go:125] found "functional-525396" server: "https://192.168.49.2:8441"
	I1208 00:38:30.497560  832221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:38:30.505232  832221 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:23:53.462513047 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:38:29.298599774 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1208 00:38:30.505258  832221 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:38:30.505269  832221 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 00:38:30.505341  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.544576  832221 cri.go:89] found id: ""
	I1208 00:38:30.544636  832221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:38:30.564190  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:38:30.571945  832221 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  8 00:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  8 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  8 00:28 /etc/kubernetes/scheduler.conf
	
	I1208 00:38:30.572003  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:38:30.579767  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:38:30.588961  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.589038  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:38:30.596275  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.604001  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.604058  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.611049  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:38:30.618317  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.618369  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:38:30.625673  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:38:30.633203  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:30.679020  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.303260  832221 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.624214812s)
	I1208 00:38:32.303321  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.499121  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.557405  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.605845  832221 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:38:32.605924  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.106778  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.606873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.106818  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.606134  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.106245  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.607017  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.106011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.606401  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.106569  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.606153  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.106367  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.605995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.106910  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.606698  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.606687  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.106589  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.606067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.106823  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.606794  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.106122  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.606931  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.106765  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.606092  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.107046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.606088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.106757  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.606004  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.106996  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.606590  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.106432  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.106745  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.606390  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.106196  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.606618  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.106064  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.606867  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.106995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.606766  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.106131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.606779  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.106290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.606219  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.106089  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.607007  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.106717  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.106475  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.607046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.106582  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.606125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.107067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.606667  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.106461  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.606353  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.106471  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.606654  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.107110  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.607006  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.106780  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.606382  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.106088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.606332  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.106060  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.106803  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.606107  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.106414  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.606178  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.106868  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.606030  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.106375  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.606102  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.107011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.606304  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.106096  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.606827  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.606893  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.107045  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.606816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.106126  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.606899  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.106572  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.606111  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.606103  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.106801  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.606703  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.106595  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.606139  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.106918  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.606350  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.106147  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.606821  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.106994  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.606129  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.106114  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.606499  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.106132  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.606921  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.106736  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.606121  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.106425  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.606155  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.106763  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.106058  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.606943  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.106991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.606966  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.106181  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.606342  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.106653  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.606117  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.106026  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.606138  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:32.606213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:32.631935  832221 cri.go:89] found id: ""
	I1208 00:39:32.631949  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.631956  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:32.631962  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:32.632027  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:32.657240  832221 cri.go:89] found id: ""
	I1208 00:39:32.657260  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.657267  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:32.657273  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:32.657332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:32.686247  832221 cri.go:89] found id: ""
	I1208 00:39:32.686261  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.686269  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:32.686274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:32.686334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:32.712330  832221 cri.go:89] found id: ""
	I1208 00:39:32.712345  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.712352  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:32.712358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:32.712416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:32.738663  832221 cri.go:89] found id: ""
	I1208 00:39:32.738678  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.738685  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:32.738690  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:32.738755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:32.765710  832221 cri.go:89] found id: ""
	I1208 00:39:32.765725  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.765731  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:32.765737  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:32.765792  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:32.791480  832221 cri.go:89] found id: ""
	I1208 00:39:32.791494  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.791501  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:32.791509  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:32.791520  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:32.856630  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:32.856654  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:32.873574  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:32.873591  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:32.937953  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:32.937966  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:32.937977  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:33.008749  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:33.008776  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.542093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:35.553517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:35.553575  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:35.584212  832221 cri.go:89] found id: ""
	I1208 00:39:35.584226  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.584233  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:35.584238  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:35.584296  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:35.615871  832221 cri.go:89] found id: ""
	I1208 00:39:35.615885  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.615892  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:35.615897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:35.615954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:35.641597  832221 cri.go:89] found id: ""
	I1208 00:39:35.641611  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.641618  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:35.641623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:35.641683  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:35.667538  832221 cri.go:89] found id: ""
	I1208 00:39:35.667551  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.667567  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:35.667572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:35.667633  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:35.696105  832221 cri.go:89] found id: ""
	I1208 00:39:35.696118  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.696124  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:35.696130  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:35.696187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:35.725150  832221 cri.go:89] found id: ""
	I1208 00:39:35.725165  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.725172  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:35.725178  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:35.725236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:35.752762  832221 cri.go:89] found id: ""
	I1208 00:39:35.752776  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.752783  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:35.752791  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:35.752801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.780454  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:35.780471  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:35.846096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:35.846118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:35.863081  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:35.863098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:35.932235  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:35.932246  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:35.932259  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.502146  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:38.514634  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:38.514691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:38.548208  832221 cri.go:89] found id: ""
	I1208 00:39:38.548223  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.548230  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:38.548235  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:38.548305  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:38.579066  832221 cri.go:89] found id: ""
	I1208 00:39:38.579080  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.579087  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:38.579092  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:38.579154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:38.605928  832221 cri.go:89] found id: ""
	I1208 00:39:38.605942  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.605949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:38.605954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:38.606013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:38.631317  832221 cri.go:89] found id: ""
	I1208 00:39:38.631332  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.631339  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:38.631350  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:38.631410  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:38.657581  832221 cri.go:89] found id: ""
	I1208 00:39:38.657595  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.657602  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:38.657607  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:38.657664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:38.688104  832221 cri.go:89] found id: ""
	I1208 00:39:38.688118  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.688125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:38.688131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:38.688191  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:38.712900  832221 cri.go:89] found id: ""
	I1208 00:39:38.712914  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.712921  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:38.712929  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:38.712939  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.782215  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:38.782236  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:38.813188  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:38.813203  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:38.882554  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:38.882574  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:38.899573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:38.899590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:38.963587  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.464816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:41.476933  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:41.476994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:41.519038  832221 cri.go:89] found id: ""
	I1208 00:39:41.519052  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.519059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:41.519065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:41.519120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:41.549931  832221 cri.go:89] found id: ""
	I1208 00:39:41.549946  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.549953  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:41.549958  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:41.550016  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:41.579952  832221 cri.go:89] found id: ""
	I1208 00:39:41.579966  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.579973  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:41.579978  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:41.580038  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:41.609851  832221 cri.go:89] found id: ""
	I1208 00:39:41.609865  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.609873  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:41.609878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:41.609940  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:41.635896  832221 cri.go:89] found id: ""
	I1208 00:39:41.635910  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.635917  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:41.635923  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:41.635986  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:41.662056  832221 cri.go:89] found id: ""
	I1208 00:39:41.662083  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.662091  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:41.662097  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:41.662170  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:41.687327  832221 cri.go:89] found id: ""
	I1208 00:39:41.687342  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.687349  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:41.687357  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:41.687367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:41.753129  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:41.753148  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:41.769911  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:41.769927  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:41.838088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.838099  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:41.838111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:41.910629  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:41.910651  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:44.440476  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:44.450677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:44.450737  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:44.477661  832221 cri.go:89] found id: ""
	I1208 00:39:44.477674  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.477681  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:44.477687  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:44.477754  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:44.502810  832221 cri.go:89] found id: ""
	I1208 00:39:44.502824  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.502831  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:44.502836  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:44.502922  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:44.536158  832221 cri.go:89] found id: ""
	I1208 00:39:44.536171  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.536178  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:44.536187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:44.536245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:44.569819  832221 cri.go:89] found id: ""
	I1208 00:39:44.569832  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.569839  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:44.569844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:44.569900  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:44.596822  832221 cri.go:89] found id: ""
	I1208 00:39:44.596837  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.596844  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:44.596849  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:44.596909  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:44.626118  832221 cri.go:89] found id: ""
	I1208 00:39:44.626132  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.626139  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:44.626159  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:44.626220  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:44.651327  832221 cri.go:89] found id: ""
	I1208 00:39:44.651341  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.651348  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:44.651356  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:44.651366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:44.717153  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:44.717174  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:44.734169  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:44.734200  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:44.800240  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:44.800252  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:44.800263  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:44.873699  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:44.873729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.404232  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:47.415493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:47.415558  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:47.442934  832221 cri.go:89] found id: ""
	I1208 00:39:47.442948  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.442955  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:47.442961  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:47.443025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:47.468072  832221 cri.go:89] found id: ""
	I1208 00:39:47.468086  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.468093  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:47.468099  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:47.468169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:47.499439  832221 cri.go:89] found id: ""
	I1208 00:39:47.499452  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.499460  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:47.499465  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:47.499522  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:47.525160  832221 cri.go:89] found id: ""
	I1208 00:39:47.525173  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.525180  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:47.525186  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:47.525261  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:47.557881  832221 cri.go:89] found id: ""
	I1208 00:39:47.557902  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.557909  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:47.557915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:47.557973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:47.585993  832221 cri.go:89] found id: ""
	I1208 00:39:47.586006  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.586013  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:47.586018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:47.586074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:47.611544  832221 cri.go:89] found id: ""
	I1208 00:39:47.611559  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.611565  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:47.611573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:47.611594  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:47.673948  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:47.673960  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:47.673971  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:47.746050  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:47.746071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.778206  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:47.778228  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:47.843769  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:47.843788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.361131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:50.373118  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:50.373178  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:50.402177  832221 cri.go:89] found id: ""
	I1208 00:39:50.402192  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.402199  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:50.402204  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:50.402262  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:50.428277  832221 cri.go:89] found id: ""
	I1208 00:39:50.428291  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.428298  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:50.428303  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:50.428361  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:50.453780  832221 cri.go:89] found id: ""
	I1208 00:39:50.453793  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.453801  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:50.453806  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:50.453867  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:50.478816  832221 cri.go:89] found id: ""
	I1208 00:39:50.478830  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.478838  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:50.478887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:50.478952  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:50.506494  832221 cri.go:89] found id: ""
	I1208 00:39:50.506508  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.506516  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:50.506523  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:50.506581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:50.548254  832221 cri.go:89] found id: ""
	I1208 00:39:50.548267  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.548275  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:50.548289  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:50.548345  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:50.580999  832221 cri.go:89] found id: ""
	I1208 00:39:50.581013  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.581020  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:50.581028  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:50.581038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:50.646872  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:50.646894  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.663705  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:50.663722  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:50.731208  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:50.731220  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:50.731231  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:50.800530  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:50.800552  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:53.328838  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:53.338798  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:53.338876  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:53.364078  832221 cri.go:89] found id: ""
	I1208 00:39:53.364093  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.364100  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:53.364106  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:53.364165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:53.389870  832221 cri.go:89] found id: ""
	I1208 00:39:53.389884  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.389891  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:53.389897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:53.389955  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:53.415578  832221 cri.go:89] found id: ""
	I1208 00:39:53.415592  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.415600  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:53.415606  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:53.415664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:53.440749  832221 cri.go:89] found id: ""
	I1208 00:39:53.440763  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.440769  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:53.440775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:53.440837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:53.469528  832221 cri.go:89] found id: ""
	I1208 00:39:53.469542  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.469550  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:53.469555  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:53.469614  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:53.494205  832221 cri.go:89] found id: ""
	I1208 00:39:53.494219  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.494225  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:53.494231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:53.494286  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:53.536734  832221 cri.go:89] found id: ""
	I1208 00:39:53.536748  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.536755  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:53.536763  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:53.536773  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:53.608590  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:53.608610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:53.625117  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:53.625134  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:53.687237  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:53.687248  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:53.687258  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:53.755459  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:53.755480  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.290756  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:56.302211  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:56.302272  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:56.327085  832221 cri.go:89] found id: ""
	I1208 00:39:56.327098  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.327105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:56.327110  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:56.327165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:56.351553  832221 cri.go:89] found id: ""
	I1208 00:39:56.351567  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.351574  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:56.351579  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:56.351636  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:56.375432  832221 cri.go:89] found id: ""
	I1208 00:39:56.375445  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.375451  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:56.375456  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:56.375513  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:56.399254  832221 cri.go:89] found id: ""
	I1208 00:39:56.399267  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.399274  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:56.399282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:56.399337  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:56.424239  832221 cri.go:89] found id: ""
	I1208 00:39:56.424253  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.424260  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:56.424265  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:56.424322  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:56.447970  832221 cri.go:89] found id: ""
	I1208 00:39:56.447983  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.447990  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:56.447996  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:56.448059  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:56.480639  832221 cri.go:89] found id: ""
	I1208 00:39:56.480652  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.480659  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:56.480666  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:56.480680  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.514333  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:56.514349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:56.587248  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:56.587268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:56.604138  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:56.604156  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:56.667583  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:56.667593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:56.667605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.236478  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:59.246590  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:59.246653  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:59.274726  832221 cri.go:89] found id: ""
	I1208 00:39:59.274739  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.274746  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:59.274752  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:59.274816  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:59.302946  832221 cri.go:89] found id: ""
	I1208 00:39:59.302960  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.302967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:59.302972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:59.303036  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:59.328486  832221 cri.go:89] found id: ""
	I1208 00:39:59.328510  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.328517  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:59.328522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:59.328583  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:59.354620  832221 cri.go:89] found id: ""
	I1208 00:39:59.354638  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.354645  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:59.354651  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:59.354722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:59.379131  832221 cri.go:89] found id: ""
	I1208 00:39:59.379145  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.379152  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:59.379157  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:59.379221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:59.407900  832221 cri.go:89] found id: ""
	I1208 00:39:59.407915  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.407921  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:59.407930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:59.407999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:59.432790  832221 cri.go:89] found id: ""
	I1208 00:39:59.432804  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.432811  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:59.432819  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:59.432829  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:59.498500  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:59.498521  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:59.517843  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:59.517860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:59.592346  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:59.592356  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:59.592366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.660798  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:59.660821  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.193318  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:02.204389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:02.204452  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:02.233248  832221 cri.go:89] found id: ""
	I1208 00:40:02.233262  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.233272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:02.233277  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:02.233338  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:02.259542  832221 cri.go:89] found id: ""
	I1208 00:40:02.259555  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.259562  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:02.259567  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:02.259626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:02.284406  832221 cri.go:89] found id: ""
	I1208 00:40:02.284421  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.284428  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:02.284433  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:02.284492  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:02.314792  832221 cri.go:89] found id: ""
	I1208 00:40:02.314807  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.314815  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:02.314820  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:02.314902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:02.345720  832221 cri.go:89] found id: ""
	I1208 00:40:02.345735  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.345742  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:02.345748  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:02.345806  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:02.374260  832221 cri.go:89] found id: ""
	I1208 00:40:02.374275  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.374282  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:02.374288  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:02.374356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:02.401424  832221 cri.go:89] found id: ""
	I1208 00:40:02.401448  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.401456  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:02.401464  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:02.401477  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:02.418749  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:02.418772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:02.488580  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:02.488593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:02.488605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:02.561942  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:02.561963  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.594984  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:02.595001  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.164061  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:05.174102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:05.174162  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:05.200676  832221 cri.go:89] found id: ""
	I1208 00:40:05.200690  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.200697  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:05.200702  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:05.200762  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:05.229843  832221 cri.go:89] found id: ""
	I1208 00:40:05.229857  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.229864  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:05.229869  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:05.229923  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:05.254905  832221 cri.go:89] found id: ""
	I1208 00:40:05.254919  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.254926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:05.254930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:05.254989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:05.284106  832221 cri.go:89] found id: ""
	I1208 00:40:05.284120  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.284127  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:05.284132  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:05.284197  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:05.308626  832221 cri.go:89] found id: ""
	I1208 00:40:05.308640  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.308647  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:05.308652  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:05.308714  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:05.337161  832221 cri.go:89] found id: ""
	I1208 00:40:05.337175  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.337182  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:05.337187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:05.337268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:05.362077  832221 cri.go:89] found id: ""
	I1208 00:40:05.362091  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.362098  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:05.362105  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:05.362116  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.428096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:05.428115  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:05.445139  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:05.445161  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:05.507290  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:05.507310  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:05.507321  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:05.586340  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:05.586361  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.118998  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:08.129512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:08.129588  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:08.156251  832221 cri.go:89] found id: ""
	I1208 00:40:08.156265  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.156272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:08.156278  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:08.156344  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:08.183906  832221 cri.go:89] found id: ""
	I1208 00:40:08.183919  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.183926  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:08.183931  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:08.183987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:08.210358  832221 cri.go:89] found id: ""
	I1208 00:40:08.210372  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.210379  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:08.210384  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:08.210442  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:08.235462  832221 cri.go:89] found id: ""
	I1208 00:40:08.235476  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.235483  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:08.235489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:08.235544  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:08.261687  832221 cri.go:89] found id: ""
	I1208 00:40:08.261700  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.261707  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:08.261713  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:08.261771  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:08.285826  832221 cri.go:89] found id: ""
	I1208 00:40:08.285842  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.285849  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:08.285854  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:08.285912  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:08.312132  832221 cri.go:89] found id: ""
	I1208 00:40:08.312146  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.312153  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:08.312161  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:08.312171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:08.380160  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:08.380177  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:08.380187  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:08.455282  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:08.455305  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.490186  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:08.490207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:08.563751  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:08.563779  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.082398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:11.092581  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:11.092642  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:11.118553  832221 cri.go:89] found id: ""
	I1208 00:40:11.118568  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.118575  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:11.118580  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:11.118638  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:11.144055  832221 cri.go:89] found id: ""
	I1208 00:40:11.144070  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.144077  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:11.144082  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:11.144144  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:11.169906  832221 cri.go:89] found id: ""
	I1208 00:40:11.169919  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.169926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:11.169931  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:11.169988  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:11.197596  832221 cri.go:89] found id: ""
	I1208 00:40:11.197610  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.197617  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:11.197623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:11.197681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:11.223606  832221 cri.go:89] found id: ""
	I1208 00:40:11.223624  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.223631  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:11.223636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:11.223693  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:11.248818  832221 cri.go:89] found id: ""
	I1208 00:40:11.248832  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.248838  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:11.248844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:11.248902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:11.273540  832221 cri.go:89] found id: ""
	I1208 00:40:11.273554  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.273561  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:11.273568  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:11.273579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:11.338706  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:11.338726  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.357554  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:11.357571  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:11.420756  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:11.420767  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:11.420788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:11.489139  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:11.489157  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.024714  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:14.035808  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:14.035873  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:14.061793  832221 cri.go:89] found id: ""
	I1208 00:40:14.061807  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.061814  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:14.061819  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:14.061875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:14.090633  832221 cri.go:89] found id: ""
	I1208 00:40:14.090647  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.090654  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:14.090661  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:14.090719  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:14.115546  832221 cri.go:89] found id: ""
	I1208 00:40:14.115560  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.115567  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:14.115572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:14.115629  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:14.141065  832221 cri.go:89] found id: ""
	I1208 00:40:14.141079  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.141086  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:14.141091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:14.141154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:14.165799  832221 cri.go:89] found id: ""
	I1208 00:40:14.165814  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.165821  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:14.165826  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:14.165886  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:14.195480  832221 cri.go:89] found id: ""
	I1208 00:40:14.195494  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.195501  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:14.195506  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:14.195564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:14.220362  832221 cri.go:89] found id: ""
	I1208 00:40:14.220377  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.220384  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:14.220392  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:14.220405  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:14.287292  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:14.287303  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:14.287313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:14.356018  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:14.356038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.387237  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:14.387253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:14.454492  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:14.454512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:16.972125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:16.982309  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:16.982372  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:17.017693  832221 cri.go:89] found id: ""
	I1208 00:40:17.017706  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.017714  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:17.017719  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:17.017778  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:17.044376  832221 cri.go:89] found id: ""
	I1208 00:40:17.044391  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.044399  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:17.044404  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:17.044473  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:17.070587  832221 cri.go:89] found id: ""
	I1208 00:40:17.070601  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.070608  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:17.070613  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:17.070672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:17.095978  832221 cri.go:89] found id: ""
	I1208 00:40:17.095992  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.095999  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:17.096004  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:17.096062  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:17.122135  832221 cri.go:89] found id: ""
	I1208 00:40:17.122149  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.122156  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:17.122161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:17.122221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:17.148103  832221 cri.go:89] found id: ""
	I1208 00:40:17.148118  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.148125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:17.148131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:17.148192  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:17.172943  832221 cri.go:89] found id: ""
	I1208 00:40:17.172957  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.172964  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:17.172971  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:17.172982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:17.238368  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:17.238387  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:17.255667  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:17.255685  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:17.321644  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:17.321656  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:17.321667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:17.394476  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:17.394498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:19.927345  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:19.939629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:19.939691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:19.965406  832221 cri.go:89] found id: ""
	I1208 00:40:19.965420  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.965427  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:19.965432  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:19.965500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:19.992009  832221 cri.go:89] found id: ""
	I1208 00:40:19.992023  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.992030  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:19.992035  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:19.992098  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:20.029302  832221 cri.go:89] found id: ""
	I1208 00:40:20.029317  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.029324  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:20.029330  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:20.029399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:20.058056  832221 cri.go:89] found id: ""
	I1208 00:40:20.058071  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.058085  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:20.058091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:20.058165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:20.084189  832221 cri.go:89] found id: ""
	I1208 00:40:20.084203  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.084211  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:20.084216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:20.084291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:20.111361  832221 cri.go:89] found id: ""
	I1208 00:40:20.111376  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.111383  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:20.111389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:20.111449  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:20.141805  832221 cri.go:89] found id: ""
	I1208 00:40:20.141819  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.141826  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:20.141834  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:20.141844  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:20.169490  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:20.169506  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:20.234965  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:20.234985  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:20.252060  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:20.252078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:20.320257  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:20.320267  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:20.320280  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:22.888858  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:22.899382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:22.899447  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:22.924604  832221 cri.go:89] found id: ""
	I1208 00:40:22.924619  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.924625  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:22.924631  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:22.924698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:22.955239  832221 cri.go:89] found id: ""
	I1208 00:40:22.955253  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.955259  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:22.955264  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:22.955323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:22.981222  832221 cri.go:89] found id: ""
	I1208 00:40:22.981237  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.981244  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:22.981250  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:22.981317  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:23.011070  832221 cri.go:89] found id: ""
	I1208 00:40:23.011085  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.011092  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:23.011098  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:23.011169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:23.038240  832221 cri.go:89] found id: ""
	I1208 00:40:23.038255  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.038263  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:23.038268  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:23.038329  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:23.068452  832221 cri.go:89] found id: ""
	I1208 00:40:23.068466  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.068473  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:23.068479  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:23.068536  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:23.094006  832221 cri.go:89] found id: ""
	I1208 00:40:23.094020  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.094027  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:23.094035  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:23.094047  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:23.160498  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:23.160517  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:23.177630  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:23.177647  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:23.241245  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:23.241256  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:23.241268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:23.310140  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:23.310159  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:25.838645  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:25.849038  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:25.849104  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:25.876484  832221 cri.go:89] found id: ""
	I1208 00:40:25.876499  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.876506  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:25.876512  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:25.876574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:25.906565  832221 cri.go:89] found id: ""
	I1208 00:40:25.906579  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.906587  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:25.906592  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:25.906649  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:25.937448  832221 cri.go:89] found id: ""
	I1208 00:40:25.937463  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.937471  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:25.937476  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:25.937537  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:25.966528  832221 cri.go:89] found id: ""
	I1208 00:40:25.966542  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.966549  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:25.966554  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:25.966609  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:25.993465  832221 cri.go:89] found id: ""
	I1208 00:40:25.993480  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.993487  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:25.993493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:25.993554  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:26.022155  832221 cri.go:89] found id: ""
	I1208 00:40:26.022168  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.022175  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:26.022181  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:26.022239  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:26.049049  832221 cri.go:89] found id: ""
	I1208 00:40:26.049064  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.049072  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:26.049087  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:26.049098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:26.119386  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:26.119406  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:26.155712  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:26.155729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:26.223788  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:26.223809  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:26.245587  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:26.245610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:26.309129  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:28.809355  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:28.819547  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:28.819610  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:28.849672  832221 cri.go:89] found id: ""
	I1208 00:40:28.849687  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.849694  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:28.849700  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:28.849760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:28.880748  832221 cri.go:89] found id: ""
	I1208 00:40:28.880763  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.880769  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:28.880774  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:28.880837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:28.908198  832221 cri.go:89] found id: ""
	I1208 00:40:28.908212  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.908219  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:28.908224  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:28.908282  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:28.933130  832221 cri.go:89] found id: ""
	I1208 00:40:28.933144  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.933151  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:28.933156  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:28.933222  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:28.964126  832221 cri.go:89] found id: ""
	I1208 00:40:28.964140  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.964147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:28.964153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:28.964210  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:28.990484  832221 cri.go:89] found id: ""
	I1208 00:40:28.990499  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.990506  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:28.990512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:28.990573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:29.017806  832221 cri.go:89] found id: ""
	I1208 00:40:29.017820  832221 logs.go:282] 0 containers: []
	W1208 00:40:29.017828  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:29.017835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:29.017847  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:29.084613  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:29.084635  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:29.101973  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:29.101992  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:29.173921  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:29.173933  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:29.173944  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:29.240893  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:29.240915  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:31.777057  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:31.790721  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:31.790788  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:31.822768  832221 cri.go:89] found id: ""
	I1208 00:40:31.822783  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.822790  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:31.822795  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:31.822969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:31.848644  832221 cri.go:89] found id: ""
	I1208 00:40:31.848657  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.848672  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:31.848678  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:31.848745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:31.874088  832221 cri.go:89] found id: ""
	I1208 00:40:31.874101  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.874117  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:31.874123  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:31.874179  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:31.899211  832221 cri.go:89] found id: ""
	I1208 00:40:31.899234  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.899242  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:31.899247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:31.899316  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:31.924268  832221 cri.go:89] found id: ""
	I1208 00:40:31.924282  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.924290  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:31.924295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:31.924355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:31.950349  832221 cri.go:89] found id: ""
	I1208 00:40:31.950363  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.950370  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:31.950376  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:31.950433  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:31.979825  832221 cri.go:89] found id: ""
	I1208 00:40:31.979848  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.979856  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:31.979864  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:31.979875  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:32.045728  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:32.045748  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:32.062977  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:32.062995  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:32.127567  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:32.127579  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:32.127590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:32.195761  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:32.195782  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:34.725887  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:34.742661  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:34.742722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:34.778651  832221 cri.go:89] found id: ""
	I1208 00:40:34.778665  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.778672  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:34.778678  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:34.778736  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:34.811974  832221 cri.go:89] found id: ""
	I1208 00:40:34.811988  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.811995  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:34.812000  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:34.812057  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:34.844697  832221 cri.go:89] found id: ""
	I1208 00:40:34.844712  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.844719  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:34.844725  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:34.844782  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:34.872482  832221 cri.go:89] found id: ""
	I1208 00:40:34.872495  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.872502  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:34.872509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:34.872564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:34.898220  832221 cri.go:89] found id: ""
	I1208 00:40:34.898235  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.898242  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:34.898247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:34.898308  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:34.925442  832221 cri.go:89] found id: ""
	I1208 00:40:34.925457  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.925464  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:34.925470  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:34.925527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:34.952326  832221 cri.go:89] found id: ""
	I1208 00:40:34.952340  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.952347  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:34.952355  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:34.952367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:35.018286  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:35.018308  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:35.036568  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:35.036588  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:35.105378  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:35.105389  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:35.105403  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:35.175887  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:35.175909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:37.712873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:37.722837  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:37.722915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:37.748671  832221 cri.go:89] found id: ""
	I1208 00:40:37.748684  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.748691  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:37.748697  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:37.748760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:37.787454  832221 cri.go:89] found id: ""
	I1208 00:40:37.787467  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.787475  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:37.787479  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:37.787540  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:37.827928  832221 cri.go:89] found id: ""
	I1208 00:40:37.827942  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.827949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:37.827954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:37.828015  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:37.853248  832221 cri.go:89] found id: ""
	I1208 00:40:37.853261  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.853268  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:37.853274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:37.853333  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:37.881771  832221 cri.go:89] found id: ""
	I1208 00:40:37.881785  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.881792  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:37.881797  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:37.881862  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:37.908845  832221 cri.go:89] found id: ""
	I1208 00:40:37.908858  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.908864  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:37.908870  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:37.908927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:37.933663  832221 cri.go:89] found id: ""
	I1208 00:40:37.933676  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.933684  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:37.933691  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:37.933702  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:37.950237  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:37.950253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:38.015251  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:38.015261  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:38.015272  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:38.086877  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:38.086899  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:38.120835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:38.120851  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:40.690876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:40.701698  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:40.701757  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:40.728919  832221 cri.go:89] found id: ""
	I1208 00:40:40.728933  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.728944  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:40.728950  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:40.729006  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:40.756412  832221 cri.go:89] found id: ""
	I1208 00:40:40.756426  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.756433  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:40.756438  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:40.756496  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:40.785209  832221 cri.go:89] found id: ""
	I1208 00:40:40.785223  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.785230  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:40.785235  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:40.785293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:40.812803  832221 cri.go:89] found id: ""
	I1208 00:40:40.812816  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.812823  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:40.812828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:40.812884  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:40.841663  832221 cri.go:89] found id: ""
	I1208 00:40:40.841676  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.841683  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:40.841688  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:40.841745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:40.867267  832221 cri.go:89] found id: ""
	I1208 00:40:40.867281  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.867298  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:40.867304  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:40.867365  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:40.896639  832221 cri.go:89] found id: ""
	I1208 00:40:40.896652  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.896661  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:40.896668  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:40.896678  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:40.960376  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:40.960386  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:40.960397  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:41.032818  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:41.032839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.062752  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:41.062771  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:41.130656  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:41.130676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.649290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:43.659339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:43.659404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:43.685304  832221 cri.go:89] found id: ""
	I1208 00:40:43.685319  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.685326  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:43.685332  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:43.685394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:43.710805  832221 cri.go:89] found id: ""
	I1208 00:40:43.710820  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.710827  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:43.710856  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:43.710933  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:43.735910  832221 cri.go:89] found id: ""
	I1208 00:40:43.735923  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.735930  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:43.735936  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:43.735994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:43.776908  832221 cri.go:89] found id: ""
	I1208 00:40:43.776921  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.776928  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:43.776934  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:43.776997  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:43.809711  832221 cri.go:89] found id: ""
	I1208 00:40:43.809724  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.809731  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:43.809736  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:43.809794  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:43.838996  832221 cri.go:89] found id: ""
	I1208 00:40:43.839009  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.839016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:43.839022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:43.839087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:43.864075  832221 cri.go:89] found id: ""
	I1208 00:40:43.864088  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.864095  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:43.864103  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:43.864120  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:43.930430  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:43.930449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.948281  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:43.948301  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:44.016438  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:44.016448  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:44.016462  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:44.087788  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:44.087808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.619014  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:46.629647  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:46.629711  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:46.655337  832221 cri.go:89] found id: ""
	I1208 00:40:46.655352  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.655360  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:46.655365  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:46.655426  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:46.685122  832221 cri.go:89] found id: ""
	I1208 00:40:46.685137  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.685145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:46.685150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:46.685218  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:46.711647  832221 cri.go:89] found id: ""
	I1208 00:40:46.711661  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.711669  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:46.711674  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:46.711739  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:46.739056  832221 cri.go:89] found id: ""
	I1208 00:40:46.739070  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.739077  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:46.739082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:46.739138  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:46.777014  832221 cri.go:89] found id: ""
	I1208 00:40:46.777040  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.777047  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:46.777053  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:46.777120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:46.821392  832221 cri.go:89] found id: ""
	I1208 00:40:46.821407  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.821414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:46.821419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:46.821481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:46.847683  832221 cri.go:89] found id: ""
	I1208 00:40:46.847706  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.847714  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:46.847722  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:46.847735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.880771  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:46.880787  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:46.946188  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:46.946208  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:46.965130  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:46.965147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:47.035809  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:47.035820  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:47.035843  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.603876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:49.614271  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:49.614332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:49.640814  832221 cri.go:89] found id: ""
	I1208 00:40:49.640827  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.640834  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:49.640840  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:49.640898  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:49.670323  832221 cri.go:89] found id: ""
	I1208 00:40:49.670337  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.670345  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:49.670351  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:49.670409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:49.696270  832221 cri.go:89] found id: ""
	I1208 00:40:49.696284  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.696290  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:49.696295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:49.696353  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:49.725434  832221 cri.go:89] found id: ""
	I1208 00:40:49.725448  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.725454  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:49.725468  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:49.725525  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:49.760362  832221 cri.go:89] found id: ""
	I1208 00:40:49.760375  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.760382  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:49.760393  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:49.760450  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:49.789531  832221 cri.go:89] found id: ""
	I1208 00:40:49.789545  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.789552  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:49.789567  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:49.789637  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:49.818353  832221 cri.go:89] found id: ""
	I1208 00:40:49.818367  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.818374  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:49.818390  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:49.818401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.890934  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:49.890956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:49.919198  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:49.919214  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:49.988173  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:49.988194  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:50.007229  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:50.007249  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:50.081725  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.581991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:52.592775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:52.592847  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:52.619761  832221 cri.go:89] found id: ""
	I1208 00:40:52.619775  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.619782  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:52.619788  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:52.619853  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:52.647647  832221 cri.go:89] found id: ""
	I1208 00:40:52.647662  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.647669  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:52.647674  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:52.647761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:52.673131  832221 cri.go:89] found id: ""
	I1208 00:40:52.673145  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.673152  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:52.673161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:52.673228  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:52.699525  832221 cri.go:89] found id: ""
	I1208 00:40:52.699540  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.699547  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:52.699553  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:52.699620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:52.725467  832221 cri.go:89] found id: ""
	I1208 00:40:52.725482  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.725489  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:52.725494  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:52.725556  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:52.756767  832221 cri.go:89] found id: ""
	I1208 00:40:52.756782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.756790  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:52.756796  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:52.756855  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:52.787768  832221 cri.go:89] found id: ""
	I1208 00:40:52.787782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.787790  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:52.787797  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:52.787808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:52.817811  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:52.817827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:52.889380  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:52.889401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:52.906939  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:52.906956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:52.971866  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.971876  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:52.971889  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.544702  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:55.554800  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:55.554875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:55.581294  832221 cri.go:89] found id: ""
	I1208 00:40:55.581309  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.581316  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:55.581321  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:55.581384  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:55.609189  832221 cri.go:89] found id: ""
	I1208 00:40:55.609210  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.609217  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:55.609222  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:55.609281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:55.636121  832221 cri.go:89] found id: ""
	I1208 00:40:55.636135  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.636142  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:55.636147  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:55.636212  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:55.661670  832221 cri.go:89] found id: ""
	I1208 00:40:55.661684  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.661691  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:55.661697  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:55.661756  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:55.687332  832221 cri.go:89] found id: ""
	I1208 00:40:55.687345  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.687352  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:55.687358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:55.687416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:55.713054  832221 cri.go:89] found id: ""
	I1208 00:40:55.713069  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.713076  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:55.713082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:55.713140  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:55.742979  832221 cri.go:89] found id: ""
	I1208 00:40:55.742993  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.743000  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:55.743008  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:55.743019  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:55.761280  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:55.761297  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:55.838925  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:55.838936  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:55.838949  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.910195  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:55.910218  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:55.940346  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:55.940364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.509357  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:58.519836  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:58.519901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:58.545859  832221 cri.go:89] found id: ""
	I1208 00:40:58.545874  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.545881  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:58.545887  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:58.545948  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:58.575589  832221 cri.go:89] found id: ""
	I1208 00:40:58.575603  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.575609  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:58.575614  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:58.575672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:58.604890  832221 cri.go:89] found id: ""
	I1208 00:40:58.604905  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.604911  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:58.604917  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:58.604974  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:58.630992  832221 cri.go:89] found id: ""
	I1208 00:40:58.631006  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.631013  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:58.631018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:58.631075  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:58.656862  832221 cri.go:89] found id: ""
	I1208 00:40:58.656875  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.656882  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:58.656887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:58.656950  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:58.693729  832221 cri.go:89] found id: ""
	I1208 00:40:58.693744  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.693751  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:58.693756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:58.693815  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:58.719999  832221 cri.go:89] found id: ""
	I1208 00:40:58.720014  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.720021  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:58.720029  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:58.720040  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.787457  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:58.787475  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:58.809951  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:58.809970  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:58.877531  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:58.877584  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:58.877595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:58.944804  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:58.944823  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:01.474302  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:01.485101  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:01.485163  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:01.512067  832221 cri.go:89] found id: ""
	I1208 00:41:01.512081  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.512094  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:01.512100  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:01.512173  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:01.538625  832221 cri.go:89] found id: ""
	I1208 00:41:01.538639  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.538646  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:01.538651  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:01.538712  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:01.564246  832221 cri.go:89] found id: ""
	I1208 00:41:01.564260  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.564268  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:01.564273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:01.564341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:01.590766  832221 cri.go:89] found id: ""
	I1208 00:41:01.590780  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.590787  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:01.590793  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:01.590880  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:01.618080  832221 cri.go:89] found id: ""
	I1208 00:41:01.618095  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.618102  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:01.618107  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:01.618166  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:01.644849  832221 cri.go:89] found id: ""
	I1208 00:41:01.644864  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.644872  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:01.644878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:01.644943  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:01.670907  832221 cri.go:89] found id: ""
	I1208 00:41:01.670927  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.670945  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:01.670953  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:01.670972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:01.737140  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:01.737160  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:01.756176  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:01.756199  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:01.837855  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:01.837866  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:01.837880  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:01.907644  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:01.907665  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:04.439011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:04.449676  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:04.449738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:04.475094  832221 cri.go:89] found id: ""
	I1208 00:41:04.475107  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.475116  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:04.475122  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:04.475180  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:04.499488  832221 cri.go:89] found id: ""
	I1208 00:41:04.499502  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.499509  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:04.499514  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:04.499574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:04.524302  832221 cri.go:89] found id: ""
	I1208 00:41:04.524315  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.524322  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:04.524328  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:04.524399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:04.550178  832221 cri.go:89] found id: ""
	I1208 00:41:04.550192  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.550207  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:04.550214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:04.550290  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:04.579863  832221 cri.go:89] found id: ""
	I1208 00:41:04.579876  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.579883  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:04.579888  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:04.579947  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:04.612186  832221 cri.go:89] found id: ""
	I1208 00:41:04.612200  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.612207  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:04.612212  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:04.612268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:04.638270  832221 cri.go:89] found id: ""
	I1208 00:41:04.638291  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.638298  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:04.638305  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:04.638316  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:04.704479  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:04.704498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:04.721141  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:04.721158  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:04.791977  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:04.791987  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:04.792009  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:04.869143  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:04.869164  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
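
The block above is one iteration of minikube's apiserver wait loop: it looks for a kube-apiserver process and container, finds neither, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. The same probes can be reproduced by hand against the node; this is only a sketch, assuming the default profile is reachable via `minikube ssh` (the commands themselves are the ones shown in the ssh_runner lines above):

    # manual re-run of the probes logged above (profile/connection assumed)
    minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    minikube ssh -- "sudo journalctl -u kubelet -n 400"
    minikube ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
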
	I1208 00:41:07.399175  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:07.409630  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:07.409692  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:07.436029  832221 cri.go:89] found id: ""
	I1208 00:41:07.436051  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.436059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:07.436065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:07.436133  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:07.462353  832221 cri.go:89] found id: ""
	I1208 00:41:07.462367  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.462374  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:07.462379  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:07.462438  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:07.488128  832221 cri.go:89] found id: ""
	I1208 00:41:07.488142  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.488149  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:07.488154  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:07.488217  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:07.516680  832221 cri.go:89] found id: ""
	I1208 00:41:07.516694  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.516700  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:07.516705  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:07.516761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:07.541724  832221 cri.go:89] found id: ""
	I1208 00:41:07.541738  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.541747  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:07.541752  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:07.541809  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:07.566019  832221 cri.go:89] found id: ""
	I1208 00:41:07.566033  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.566049  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:07.566055  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:07.566120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:07.590763  832221 cri.go:89] found id: ""
	I1208 00:41:07.590786  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.590793  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:07.590800  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:07.590811  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:07.655603  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:07.655627  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:07.672718  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:07.672735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:07.739768  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:07.739777  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:07.739788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:07.818332  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:07.818351  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:10.352542  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:10.362750  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:10.362807  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:10.387611  832221 cri.go:89] found id: ""
	I1208 00:41:10.387625  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.387631  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:10.387637  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:10.387702  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:10.416324  832221 cri.go:89] found id: ""
	I1208 00:41:10.416338  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.416344  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:10.416349  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:10.416407  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:10.441107  832221 cri.go:89] found id: ""
	I1208 00:41:10.441121  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.441128  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:10.441133  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:10.441199  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:10.469633  832221 cri.go:89] found id: ""
	I1208 00:41:10.469646  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.469659  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:10.469664  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:10.469723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:10.494876  832221 cri.go:89] found id: ""
	I1208 00:41:10.494890  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.494896  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:10.494902  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:10.494960  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:10.531392  832221 cri.go:89] found id: ""
	I1208 00:41:10.531407  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.531414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:10.531419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:10.531488  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:10.564042  832221 cri.go:89] found id: ""
	I1208 00:41:10.564056  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.564063  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:10.564072  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:10.564082  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:10.630069  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:10.630089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:10.647244  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:10.647260  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:10.722704  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:10.722715  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:10.722727  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:10.795845  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:10.795865  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.326398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:13.336729  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:13.336789  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:13.362204  832221 cri.go:89] found id: ""
	I1208 00:41:13.362218  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.362225  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:13.362231  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:13.362288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:13.387741  832221 cri.go:89] found id: ""
	I1208 00:41:13.387755  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.387762  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:13.387767  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:13.387825  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:13.416495  832221 cri.go:89] found id: ""
	I1208 00:41:13.416508  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.416515  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:13.416520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:13.416580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:13.442986  832221 cri.go:89] found id: ""
	I1208 00:41:13.443000  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.443008  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:13.443015  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:13.443074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:13.468540  832221 cri.go:89] found id: ""
	I1208 00:41:13.468555  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.468562  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:13.468568  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:13.468626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:13.494472  832221 cri.go:89] found id: ""
	I1208 00:41:13.494487  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.494494  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:13.494500  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:13.494561  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:13.521305  832221 cri.go:89] found id: ""
	I1208 00:41:13.521318  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.521325  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:13.521333  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:13.521347  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.553343  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:13.553359  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:13.621324  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:13.621342  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:13.638433  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:13.638450  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:13.707199  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:13.707209  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:13.707232  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.276942  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:16.286989  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:16.287051  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:16.312004  832221 cri.go:89] found id: ""
	I1208 00:41:16.312018  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.312025  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:16.312031  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:16.312090  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:16.336677  832221 cri.go:89] found id: ""
	I1208 00:41:16.336691  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.336698  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:16.336703  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:16.336763  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:16.361556  832221 cri.go:89] found id: ""
	I1208 00:41:16.361579  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.361587  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:16.361592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:16.361661  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:16.386950  832221 cri.go:89] found id: ""
	I1208 00:41:16.386964  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.386971  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:16.386977  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:16.387045  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:16.413845  832221 cri.go:89] found id: ""
	I1208 00:41:16.413867  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.413877  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:16.413883  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:16.413949  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:16.439928  832221 cri.go:89] found id: ""
	I1208 00:41:16.439942  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.439959  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:16.439965  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:16.440030  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:16.466154  832221 cri.go:89] found id: ""
	I1208 00:41:16.466176  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.466183  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:16.466191  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:16.466201  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.533106  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:16.533124  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:16.563727  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:16.563742  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:16.633732  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:16.633751  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:16.650899  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:16.650917  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:16.719345  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
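
Every describe-nodes attempt in these cycles fails the same way: connection refused on localhost:8441, the apiserver port this profile's kubeconfig points at. That is consistent with the "0 containers" results above; no kube-apiserver container was ever created, so nothing is listening on the port. A quick hedged check one could run on the node (a sketch; `ss` and the apiserver `/livez` endpoint are standard, but both are expected to fail here exactly as the log does):

    # confirm nothing is bound to the apiserver port used by this profile
    minikube ssh -- "sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'"
    minikube ssh -- "curl -sk https://localhost:8441/livez || true"
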
	I1208 00:41:19.221010  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:19.231342  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:19.231406  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:19.257316  832221 cri.go:89] found id: ""
	I1208 00:41:19.257330  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.257337  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:19.257343  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:19.257401  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:19.283560  832221 cri.go:89] found id: ""
	I1208 00:41:19.283574  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.283581  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:19.283586  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:19.283645  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:19.309316  832221 cri.go:89] found id: ""
	I1208 00:41:19.309332  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.309339  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:19.309344  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:19.309404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:19.336530  832221 cri.go:89] found id: ""
	I1208 00:41:19.336544  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.336551  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:19.336558  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:19.336617  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:19.362493  832221 cri.go:89] found id: ""
	I1208 00:41:19.362507  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.362515  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:19.362520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:19.362580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:19.388582  832221 cri.go:89] found id: ""
	I1208 00:41:19.388602  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.388609  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:19.388614  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:19.388671  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:19.414534  832221 cri.go:89] found id: ""
	I1208 00:41:19.414547  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.414554  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:19.414562  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:19.414573  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:19.478886  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.478896  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:19.478908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:19.547311  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:19.547330  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:19.577785  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:19.577801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:19.643881  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:19.643902  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.161081  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:22.171521  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:22.171585  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:22.198382  832221 cri.go:89] found id: ""
	I1208 00:41:22.198396  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.198413  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:22.198418  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:22.198474  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:22.224532  832221 cri.go:89] found id: ""
	I1208 00:41:22.224547  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.224554  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:22.224560  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:22.224618  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:22.250646  832221 cri.go:89] found id: ""
	I1208 00:41:22.250660  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.250667  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:22.250672  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:22.250738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:22.276120  832221 cri.go:89] found id: ""
	I1208 00:41:22.276134  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.276141  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:22.276146  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:22.276204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:22.307378  832221 cri.go:89] found id: ""
	I1208 00:41:22.307392  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.307399  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:22.307405  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:22.307481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:22.332887  832221 cri.go:89] found id: ""
	I1208 00:41:22.332902  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.332909  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:22.332915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:22.332973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:22.359765  832221 cri.go:89] found id: ""
	I1208 00:41:22.359790  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.359799  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:22.359806  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:22.359817  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:22.429639  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:22.429667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.446411  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:22.446429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:22.514425  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:22.514437  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:22.514449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:22.582646  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:22.582668  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.113244  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:25.123522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:25.123581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:25.149789  832221 cri.go:89] found id: ""
	I1208 00:41:25.149803  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.149811  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:25.149816  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:25.149877  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:25.175748  832221 cri.go:89] found id: ""
	I1208 00:41:25.175780  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.175787  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:25.175793  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:25.175860  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:25.201633  832221 cri.go:89] found id: ""
	I1208 00:41:25.201647  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.201654  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:25.201660  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:25.201718  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:25.226256  832221 cri.go:89] found id: ""
	I1208 00:41:25.226270  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.226276  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:25.226282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:25.226340  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:25.251247  832221 cri.go:89] found id: ""
	I1208 00:41:25.251260  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.251267  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:25.251272  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:25.251332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:25.276489  832221 cri.go:89] found id: ""
	I1208 00:41:25.276502  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.276509  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:25.276514  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:25.276571  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:25.304102  832221 cri.go:89] found id: ""
	I1208 00:41:25.304116  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.304123  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:25.304131  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:25.304141  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.334560  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:25.334578  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:25.403772  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:25.403794  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:25.420560  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:25.420577  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:25.482668  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:25.482678  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:25.482689  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.050629  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:28.061960  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:28.062020  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:28.089309  832221 cri.go:89] found id: ""
	I1208 00:41:28.089322  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.089330  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:28.089335  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:28.089394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:28.114535  832221 cri.go:89] found id: ""
	I1208 00:41:28.114549  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.114556  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:28.114561  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:28.114620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:28.139191  832221 cri.go:89] found id: ""
	I1208 00:41:28.139205  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.139212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:28.139218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:28.139281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:28.169942  832221 cri.go:89] found id: ""
	I1208 00:41:28.169956  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.169963  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:28.169968  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:28.170026  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:28.194906  832221 cri.go:89] found id: ""
	I1208 00:41:28.194920  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.194927  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:28.194932  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:28.194991  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:28.220745  832221 cri.go:89] found id: ""
	I1208 00:41:28.220759  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.220766  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:28.220772  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:28.220831  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:28.246098  832221 cri.go:89] found id: ""
	I1208 00:41:28.246113  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.246128  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:28.246137  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:28.246147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:28.311151  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:28.311171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:28.328051  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:28.328067  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:28.392162  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:28.392172  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:28.392183  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.461355  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:28.461376  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:30.991861  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:31.002524  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:31.002603  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:31.053691  832221 cri.go:89] found id: ""
	I1208 00:41:31.053708  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.053715  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:31.053725  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:31.053785  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:31.089132  832221 cri.go:89] found id: ""
	I1208 00:41:31.089146  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.089163  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:31.089169  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:31.089252  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:31.121093  832221 cri.go:89] found id: ""
	I1208 00:41:31.121107  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.121114  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:31.121120  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:31.121193  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:31.148473  832221 cri.go:89] found id: ""
	I1208 00:41:31.148502  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.148510  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:31.148517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:31.148576  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:31.174204  832221 cri.go:89] found id: ""
	I1208 00:41:31.174218  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.174225  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:31.174231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:31.174291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:31.199996  832221 cri.go:89] found id: ""
	I1208 00:41:31.200009  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.200016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:31.200021  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:31.200079  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:31.224662  832221 cri.go:89] found id: ""
	I1208 00:41:31.224674  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.224681  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:31.224689  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:31.224699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:31.291397  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:31.291417  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:31.308061  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:31.308078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:31.372069  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:31.372079  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:31.372089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:31.443951  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:31.443972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:33.976603  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:33.987054  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:33.987113  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:34.031182  832221 cri.go:89] found id: ""
	I1208 00:41:34.031197  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.031205  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:34.031211  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:34.031285  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:34.060124  832221 cri.go:89] found id: ""
	I1208 00:41:34.060137  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.060145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:34.060150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:34.060207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:34.092539  832221 cri.go:89] found id: ""
	I1208 00:41:34.092553  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.092560  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:34.092565  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:34.092627  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:34.121995  832221 cri.go:89] found id: ""
	I1208 00:41:34.122009  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.122016  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:34.122022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:34.122077  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:34.150463  832221 cri.go:89] found id: ""
	I1208 00:41:34.150476  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.150483  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:34.150488  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:34.150549  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:34.177998  832221 cri.go:89] found id: ""
	I1208 00:41:34.178021  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.178029  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:34.178034  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:34.178102  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:34.202722  832221 cri.go:89] found id: ""
	I1208 00:41:34.202737  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.202744  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:34.202751  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:34.202761  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:34.267650  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:34.267670  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:34.284346  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:34.284364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:34.348837  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:34.348848  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:34.348858  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:34.417091  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:34.417112  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:36.948347  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:36.958825  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:36.958908  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:36.984186  832221 cri.go:89] found id: ""
	I1208 00:41:36.984200  832221 logs.go:282] 0 containers: []
	W1208 00:41:36.984207  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:36.984212  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:36.984269  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:37.020431  832221 cri.go:89] found id: ""
	I1208 00:41:37.020446  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.020454  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:37.020460  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:37.020530  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:37.067191  832221 cri.go:89] found id: ""
	I1208 00:41:37.067205  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.067212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:37.067218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:37.067294  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:37.094272  832221 cri.go:89] found id: ""
	I1208 00:41:37.094286  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.094293  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:37.094298  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:37.094355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:37.119686  832221 cri.go:89] found id: ""
	I1208 00:41:37.119709  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.119716  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:37.119722  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:37.119787  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:37.145200  832221 cri.go:89] found id: ""
	I1208 00:41:37.145214  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.145221  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:37.145227  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:37.145288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:37.171336  832221 cri.go:89] found id: ""
	I1208 00:41:37.171350  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.171357  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:37.171364  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:37.171375  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:37.237645  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:37.237664  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:37.254543  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:37.254560  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:37.322370  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:37.322380  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:37.322392  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:37.391923  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:37.391943  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:39.926099  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:39.936345  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:39.936412  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:39.962579  832221 cri.go:89] found id: ""
	I1208 00:41:39.962593  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.962600  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:39.962605  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:39.962669  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:39.989842  832221 cri.go:89] found id: ""
	I1208 00:41:39.989856  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.989863  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:39.989868  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:39.989926  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:40.044295  832221 cri.go:89] found id: ""
	I1208 00:41:40.044310  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.044325  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:40.044339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:40.044416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:40.079243  832221 cri.go:89] found id: ""
	I1208 00:41:40.079258  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.079266  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:40.079273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:40.079349  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:40.112934  832221 cri.go:89] found id: ""
	I1208 00:41:40.112948  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.112956  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:40.112961  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:40.113039  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:40.143499  832221 cri.go:89] found id: ""
	I1208 00:41:40.143513  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.143521  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:40.143526  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:40.143587  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:40.169504  832221 cri.go:89] found id: ""
	I1208 00:41:40.169519  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.169526  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:40.169533  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:40.169544  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:40.235615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:40.235638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:40.252840  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:40.252857  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:40.321804  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:40.321814  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:40.321827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:40.390368  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:40.390389  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:42.923500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:42.933619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:42.933678  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:42.959506  832221 cri.go:89] found id: ""
	I1208 00:41:42.959520  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.959527  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:42.959533  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:42.959596  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:42.984924  832221 cri.go:89] found id: ""
	I1208 00:41:42.984937  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.984946  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:42.984951  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:42.985013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:43.023875  832221 cri.go:89] found id: ""
	I1208 00:41:43.023889  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.023896  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:43.023903  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:43.023962  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:43.053076  832221 cri.go:89] found id: ""
	I1208 00:41:43.053090  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.053097  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:43.053102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:43.053185  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:43.084087  832221 cri.go:89] found id: ""
	I1208 00:41:43.084101  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.084108  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:43.084113  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:43.084174  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:43.109712  832221 cri.go:89] found id: ""
	I1208 00:41:43.109737  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.109746  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:43.109751  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:43.109817  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:43.134863  832221 cri.go:89] found id: ""
	I1208 00:41:43.134877  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.134886  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:43.134894  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:43.134908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:43.201957  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:43.201967  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:43.201982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:43.273086  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:43.273107  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:43.305154  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:43.305177  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:43.373686  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:43.373708  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:45.892403  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:45.902913  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:45.902990  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:45.927841  832221 cri.go:89] found id: ""
	I1208 00:41:45.927855  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.927862  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:45.927868  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:45.927927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:45.952154  832221 cri.go:89] found id: ""
	I1208 00:41:45.952167  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.952174  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:45.952179  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:45.952236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:45.979675  832221 cri.go:89] found id: ""
	I1208 00:41:45.979688  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.979696  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:45.979700  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:45.979755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:46.013259  832221 cri.go:89] found id: ""
	I1208 00:41:46.013273  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.013280  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:46.013285  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:46.013351  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:46.042352  832221 cri.go:89] found id: ""
	I1208 00:41:46.042366  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.042372  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:46.042377  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:46.042440  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:46.070733  832221 cri.go:89] found id: ""
	I1208 00:41:46.070746  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.070753  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:46.070763  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:46.070823  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:46.098473  832221 cri.go:89] found id: ""
	I1208 00:41:46.098487  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.098494  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:46.098502  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:46.098512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:46.125193  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:46.125209  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:46.193253  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:46.193274  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:46.210082  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:46.210099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:46.276709  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:46.276719  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:46.276730  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:48.845307  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:48.856005  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:48.856069  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:48.880627  832221 cri.go:89] found id: ""
	I1208 00:41:48.880643  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.880650  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:48.880655  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:48.880723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:48.910676  832221 cri.go:89] found id: ""
	I1208 00:41:48.910691  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.910699  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:48.910704  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:48.910765  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:48.937001  832221 cri.go:89] found id: ""
	I1208 00:41:48.937015  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.937022  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:48.937027  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:48.937087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:48.961464  832221 cri.go:89] found id: ""
	I1208 00:41:48.961478  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.961484  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:48.961489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:48.961546  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:48.985593  832221 cri.go:89] found id: ""
	I1208 00:41:48.985607  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.985614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:48.985618  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:48.985673  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:49.021903  832221 cri.go:89] found id: ""
	I1208 00:41:49.021917  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.021924  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:49.021929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:49.021987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:49.051822  832221 cri.go:89] found id: ""
	I1208 00:41:49.051835  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.051842  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:49.051850  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:49.051860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:49.119331  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:49.119350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:49.136412  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:49.136429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:49.209120  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:49.209130  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:49.209142  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:49.281668  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:49.281696  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:51.816189  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:51.826432  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:51.826508  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:51.852549  832221 cri.go:89] found id: ""
	I1208 00:41:51.852563  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.852570  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:51.852575  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:51.852639  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:51.882102  832221 cri.go:89] found id: ""
	I1208 00:41:51.882115  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.882123  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:51.882128  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:51.882183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:51.908918  832221 cri.go:89] found id: ""
	I1208 00:41:51.908931  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.908938  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:51.908943  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:51.908999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:51.933704  832221 cri.go:89] found id: ""
	I1208 00:41:51.933718  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.933725  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:51.933731  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:51.933786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:51.959460  832221 cri.go:89] found id: ""
	I1208 00:41:51.959474  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.959480  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:51.959485  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:51.959543  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:51.985138  832221 cri.go:89] found id: ""
	I1208 00:41:51.985151  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.985158  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:51.985170  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:51.985229  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:52.017078  832221 cri.go:89] found id: ""
	I1208 00:41:52.017092  832221 logs.go:282] 0 containers: []
	W1208 00:41:52.017100  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:52.017108  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:52.017118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:52.061579  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:52.061595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:52.130427  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:52.130446  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:52.146893  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:52.146909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:52.216088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:52.216098  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:52.216109  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:54.782500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:54.793061  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:54.793123  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:54.818661  832221 cri.go:89] found id: ""
	I1208 00:41:54.818675  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.818682  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:54.818688  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:54.818747  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:54.843336  832221 cri.go:89] found id: ""
	I1208 00:41:54.843351  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.843358  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:54.843363  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:54.843423  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:54.873031  832221 cri.go:89] found id: ""
	I1208 00:41:54.873045  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.873052  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:54.873057  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:54.873114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:54.904194  832221 cri.go:89] found id: ""
	I1208 00:41:54.904208  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.904215  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:54.904221  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:54.904281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:54.928355  832221 cri.go:89] found id: ""
	I1208 00:41:54.928370  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.928377  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:54.928382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:54.928441  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:54.954187  832221 cri.go:89] found id: ""
	I1208 00:41:54.954201  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.954208  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:54.954214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:54.954277  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:54.979288  832221 cri.go:89] found id: ""
	I1208 00:41:54.979301  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.979308  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:54.979316  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:54.979329  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:55.047402  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:55.047422  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:55.065193  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:55.065210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:55.134035  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:55.134045  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:55.134056  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:55.202635  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:55.202656  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:57.732860  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:57.743009  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:57.743070  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:57.769255  832221 cri.go:89] found id: ""
	I1208 00:41:57.769270  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.769277  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:57.769282  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:57.769341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:57.796071  832221 cri.go:89] found id: ""
	I1208 00:41:57.796084  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.796092  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:57.796097  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:57.796152  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:57.821305  832221 cri.go:89] found id: ""
	I1208 00:41:57.821319  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.821326  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:57.821331  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:57.821389  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:57.850632  832221 cri.go:89] found id: ""
	I1208 00:41:57.850646  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.850653  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:57.850658  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:57.850715  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:57.874739  832221 cri.go:89] found id: ""
	I1208 00:41:57.874753  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.874760  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:57.874766  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:57.874829  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:57.898660  832221 cri.go:89] found id: ""
	I1208 00:41:57.898674  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.898681  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:57.898687  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:57.898744  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:57.924451  832221 cri.go:89] found id: ""
	I1208 00:41:57.924465  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.924472  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:57.924480  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:57.924490  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:57.990717  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:57.990739  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:58.009617  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:58.009637  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:58.089328  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:58.089339  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:58.089350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:58.158129  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:58.158149  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:00.692822  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:00.703351  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:00.703413  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:00.730817  832221 cri.go:89] found id: ""
	I1208 00:42:00.730831  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.730838  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:00.730864  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:00.730925  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:00.757577  832221 cri.go:89] found id: ""
	I1208 00:42:00.757591  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.757599  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:00.757604  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:00.757668  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:00.784124  832221 cri.go:89] found id: ""
	I1208 00:42:00.784140  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.784147  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:00.784153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:00.784213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:00.811121  832221 cri.go:89] found id: ""
	I1208 00:42:00.811136  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.811143  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:00.811149  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:00.811207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:00.838124  832221 cri.go:89] found id: ""
	I1208 00:42:00.838139  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.838147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:00.838153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:00.838216  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:00.864699  832221 cri.go:89] found id: ""
	I1208 00:42:00.864713  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.864720  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:00.864726  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:00.864786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:00.890750  832221 cri.go:89] found id: ""
	I1208 00:42:00.890772  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.890780  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:00.890788  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:00.890799  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:00.956810  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:00.956830  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:00.973943  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:00.973959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:01.050555  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:01.050566  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:01.050579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:01.129234  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:01.129257  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:03.659413  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:03.669877  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:03.669937  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:03.696297  832221 cri.go:89] found id: ""
	I1208 00:42:03.696316  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.696324  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:03.696329  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:03.696388  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:03.722691  832221 cri.go:89] found id: ""
	I1208 00:42:03.722706  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.722713  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:03.722718  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:03.722777  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:03.749319  832221 cri.go:89] found id: ""
	I1208 00:42:03.749336  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.749343  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:03.749348  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:03.749409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:03.778235  832221 cri.go:89] found id: ""
	I1208 00:42:03.778250  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.778257  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:03.778262  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:03.778323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:03.805566  832221 cri.go:89] found id: ""
	I1208 00:42:03.805579  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.805586  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:03.805592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:03.805656  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:03.835418  832221 cri.go:89] found id: ""
	I1208 00:42:03.835434  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.835441  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:03.835447  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:03.835507  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:03.862034  832221 cri.go:89] found id: ""
	I1208 00:42:03.862048  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.862056  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:03.862063  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:03.862074  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:03.926004  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:03.926014  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:03.926025  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:03.994473  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:03.994491  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:04.028498  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:04.028530  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:04.103887  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:04.103913  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:06.621744  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:06.631952  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:06.632014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:06.656834  832221 cri.go:89] found id: ""
	I1208 00:42:06.656847  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.656855  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:06.656859  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:06.656915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:06.681945  832221 cri.go:89] found id: ""
	I1208 00:42:06.681960  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.681967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:06.681972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:06.682029  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:06.710714  832221 cri.go:89] found id: ""
	I1208 00:42:06.710728  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.710735  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:06.710741  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:06.710798  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:06.737689  832221 cri.go:89] found id: ""
	I1208 00:42:06.737703  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.737710  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:06.737716  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:06.737773  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:06.763380  832221 cri.go:89] found id: ""
	I1208 00:42:06.763394  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.763401  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:06.763406  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:06.763468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:06.788657  832221 cri.go:89] found id: ""
	I1208 00:42:06.788672  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.788679  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:06.788684  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:06.788743  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:06.814619  832221 cri.go:89] found id: ""
	I1208 00:42:06.814633  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.814641  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:06.814648  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:06.814659  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:06.876947  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:06.876957  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:06.876967  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:06.945083  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:06.945103  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:06.975476  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:06.975492  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:07.049079  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:07.049111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.568507  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:09.578816  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:09.578896  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:09.604243  832221 cri.go:89] found id: ""
	I1208 00:42:09.604264  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.604271  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:09.604276  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:09.604335  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:09.629065  832221 cri.go:89] found id: ""
	I1208 00:42:09.629079  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.629086  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:09.629091  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:09.629187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:09.657275  832221 cri.go:89] found id: ""
	I1208 00:42:09.657288  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.657295  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:09.657300  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:09.657356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:09.683416  832221 cri.go:89] found id: ""
	I1208 00:42:09.683431  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.683438  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:09.683443  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:09.683500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:09.709238  832221 cri.go:89] found id: ""
	I1208 00:42:09.709261  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.709269  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:09.709274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:09.709339  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:09.734114  832221 cri.go:89] found id: ""
	I1208 00:42:09.734128  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.734134  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:09.734152  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:09.734209  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:09.759311  832221 cri.go:89] found id: ""
	I1208 00:42:09.759325  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.759331  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:09.759339  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:09.759349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:09.824496  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:09.824516  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.841803  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:09.841820  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:09.904180  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:09.904190  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:09.904207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:09.971074  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:09.971095  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:12.508051  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:12.518216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:12.518274  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:12.544077  832221 cri.go:89] found id: ""
	I1208 00:42:12.544098  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.544105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:12.544121  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:12.544183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:12.573722  832221 cri.go:89] found id: ""
	I1208 00:42:12.573737  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.573744  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:12.573749  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:12.573814  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:12.605486  832221 cri.go:89] found id: ""
	I1208 00:42:12.605500  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.605508  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:12.605513  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:12.605573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:12.630248  832221 cri.go:89] found id: ""
	I1208 00:42:12.630262  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.630269  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:12.630274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:12.630334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:12.657639  832221 cri.go:89] found id: ""
	I1208 00:42:12.657653  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.657660  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:12.657665  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:12.657729  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:12.687466  832221 cri.go:89] found id: ""
	I1208 00:42:12.687488  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.687495  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:12.687501  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:12.687560  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:12.712697  832221 cri.go:89] found id: ""
	I1208 00:42:12.712713  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.712720  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:12.712729  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:12.712740  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:12.782236  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:12.782256  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:12.798869  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:12.798890  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:12.869748  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:12.869759  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:12.869772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:12.940819  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:12.940839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:15.471472  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:15.481993  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:15.482061  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:15.508029  832221 cri.go:89] found id: ""
	I1208 00:42:15.508043  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.508050  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:15.508055  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:15.508114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:15.533198  832221 cri.go:89] found id: ""
	I1208 00:42:15.533212  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.533219  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:15.533224  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:15.533293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:15.559200  832221 cri.go:89] found id: ""
	I1208 00:42:15.559215  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.559222  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:15.559230  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:15.559292  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:15.586368  832221 cri.go:89] found id: ""
	I1208 00:42:15.586382  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.586389  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:15.586394  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:15.586463  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:15.613829  832221 cri.go:89] found id: ""
	I1208 00:42:15.613862  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.613870  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:15.613875  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:15.613939  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:15.638601  832221 cri.go:89] found id: ""
	I1208 00:42:15.638616  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.638623  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:15.638629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:15.638687  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:15.663577  832221 cri.go:89] found id: ""
	I1208 00:42:15.663592  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.663599  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:15.663606  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:15.663617  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:15.729315  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:15.729346  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:15.746062  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:15.746081  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:15.817222  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:15.817234  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:15.817246  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:15.884896  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:15.884916  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.414159  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:18.424398  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:18.424464  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:18.454155  832221 cri.go:89] found id: ""
	I1208 00:42:18.454169  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.454177  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:18.454183  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:18.454245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:18.479882  832221 cri.go:89] found id: ""
	I1208 00:42:18.479896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.479904  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:18.479909  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:18.479969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:18.505299  832221 cri.go:89] found id: ""
	I1208 00:42:18.505313  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.505320  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:18.505325  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:18.505383  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:18.532868  832221 cri.go:89] found id: ""
	I1208 00:42:18.532881  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.532889  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:18.532894  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:18.532954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:18.561651  832221 cri.go:89] found id: ""
	I1208 00:42:18.561664  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.561671  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:18.561677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:18.561735  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:18.589482  832221 cri.go:89] found id: ""
	I1208 00:42:18.589496  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.589503  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:18.589509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:18.589566  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:18.613882  832221 cri.go:89] found id: ""
	I1208 00:42:18.613896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.613904  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:18.613911  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:18.613922  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.641758  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:18.641774  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:18.717185  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:18.717210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:18.734137  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:18.734155  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:18.802653  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:18.802664  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:18.802676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
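
	The repeated "connection refused" on localhost:8441 above means the kube-apiserver never came up on this node, so every kubectl "describe nodes" attempt is expected to fail. A minimal sketch of how this could be confirmed by hand from inside the node (assuming shell access, for example via "minikube ssh" with the relevant profile; the profile name is not shown in this excerpt):

	    # Is any apiserver container present at all? (same check minikube runs above)
	    sudo crictl ps -a --name=kube-apiserver

	    # Is anything listening on the expected apiserver port 8441?
	    sudo ss -ltnp | grep 8441

	    # Probe the endpoint kubectl is failing to reach; -k skips TLS verification
	    curl -k https://localhost:8441/healthz

	With no apiserver container and nothing bound to 8441, the refusals above are the expected symptom of a control plane that never started, not a networking problem.
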
	I1208 00:42:21.371665  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:21.383636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:21.383698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:21.408072  832221 cri.go:89] found id: ""
	I1208 00:42:21.408086  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.408093  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:21.408098  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:21.408155  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:21.432924  832221 cri.go:89] found id: ""
	I1208 00:42:21.432948  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.432955  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:21.432961  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:21.433025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:21.457883  832221 cri.go:89] found id: ""
	I1208 00:42:21.457897  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.457904  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:21.457909  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:21.457967  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:21.483388  832221 cri.go:89] found id: ""
	I1208 00:42:21.483402  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.483410  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:21.483415  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:21.483475  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:21.509434  832221 cri.go:89] found id: ""
	I1208 00:42:21.509448  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.509456  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:21.509461  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:21.509519  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:21.534437  832221 cri.go:89] found id: ""
	I1208 00:42:21.534451  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.534458  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:21.534464  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:21.534521  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:21.559919  832221 cri.go:89] found id: ""
	I1208 00:42:21.559932  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.559939  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:21.559949  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:21.559959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:21.625640  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:21.625661  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:21.645629  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:21.645648  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:21.714153  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:21.714163  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:21.714173  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.781175  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:21.781196  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:24.310973  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:24.321986  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:24.322048  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:24.348885  832221 cri.go:89] found id: ""
	I1208 00:42:24.348899  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.348906  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:24.348912  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:24.348972  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:24.378380  832221 cri.go:89] found id: ""
	I1208 00:42:24.378394  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.378401  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:24.378407  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:24.378468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:24.403905  832221 cri.go:89] found id: ""
	I1208 00:42:24.403922  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.403933  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:24.403938  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:24.404014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:24.433947  832221 cri.go:89] found id: ""
	I1208 00:42:24.433961  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.433969  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:24.433975  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:24.434037  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:24.459342  832221 cri.go:89] found id: ""
	I1208 00:42:24.459356  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.459363  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:24.459368  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:24.459429  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:24.484750  832221 cri.go:89] found id: ""
	I1208 00:42:24.484764  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.484771  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:24.484777  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:24.484832  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:24.514464  832221 cri.go:89] found id: ""
	I1208 00:42:24.514478  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.514493  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:24.514501  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:24.514512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:24.580016  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:24.580037  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:24.598055  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:24.598071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:24.664079  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:24.664089  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:24.664099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:24.733616  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:24.733639  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:27.263764  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:27.274828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:27.274913  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:27.305226  832221 cri.go:89] found id: ""
	I1208 00:42:27.305241  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.305248  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:27.305253  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:27.305312  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:27.330800  832221 cri.go:89] found id: ""
	I1208 00:42:27.330815  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.330822  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:27.330827  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:27.330914  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:27.357232  832221 cri.go:89] found id: ""
	I1208 00:42:27.357246  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.357253  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:27.357258  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:27.357314  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:27.385173  832221 cri.go:89] found id: ""
	I1208 00:42:27.385186  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.385193  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:27.385199  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:27.385264  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:27.415410  832221 cri.go:89] found id: ""
	I1208 00:42:27.415423  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.415430  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:27.415435  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:27.415491  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:27.441114  832221 cri.go:89] found id: ""
	I1208 00:42:27.441128  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.441135  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:27.441140  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:27.441204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:27.468819  832221 cri.go:89] found id: ""
	I1208 00:42:27.468833  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.468841  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:27.468849  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:27.468859  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:27.534615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:27.534638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:27.552028  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:27.552044  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:27.617298  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:27.617308  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:27.617318  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:27.685006  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:27.685026  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.213024  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:30.223536  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:30.223597  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:30.252285  832221 cri.go:89] found id: ""
	I1208 00:42:30.252299  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.252306  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:30.252311  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:30.252378  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:30.283908  832221 cri.go:89] found id: ""
	I1208 00:42:30.283922  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.283931  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:30.283936  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:30.283994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:30.318884  832221 cri.go:89] found id: ""
	I1208 00:42:30.318899  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.318906  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:30.318912  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:30.318968  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:30.349060  832221 cri.go:89] found id: ""
	I1208 00:42:30.349075  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.349082  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:30.349088  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:30.349164  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:30.376813  832221 cri.go:89] found id: ""
	I1208 00:42:30.376829  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.376837  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:30.376842  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:30.376901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:30.404729  832221 cri.go:89] found id: ""
	I1208 00:42:30.404744  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.404750  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:30.404756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:30.404819  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:30.431212  832221 cri.go:89] found id: ""
	I1208 00:42:30.431226  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.431233  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:30.431241  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:30.431251  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:30.498900  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:30.498911  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:30.498921  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:30.567676  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:30.567699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.596733  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:30.596749  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:30.662190  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:30.662211  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
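
	Each retry cycle above gathers the same diagnostics. If the node needs to be inspected manually after a failure like this, roughly the same collection can be reproduced with the commands minikube is shown running (a sketch; unit names and flags are taken directly from the log lines above):

	    # Kubelet and container-runtime journals, as collected above
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # Kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # All containers known to the runtime (falls back to docker if crictl is absent)
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
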
	I1208 00:42:33.179806  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:33.190715  832221 kubeadm.go:602] duration metric: took 4m2.701897978s to restartPrimaryControlPlane
	W1208 00:42:33.190784  832221 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:42:33.190886  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:42:33.600155  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:42:33.612954  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:42:33.620726  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:42:33.620779  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:42:33.628462  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:42:33.628471  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:42:33.628522  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:42:33.636365  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:42:33.636420  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:42:33.643722  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:42:33.651305  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:42:33.651360  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:42:33.658707  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.666176  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:42:33.666232  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.673523  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:42:33.681031  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:42:33.681086  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
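
	Before re-running kubeadm, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here every grep exits with status 2 simply because the files are already gone after the reset. A hedged sketch of that check for a single file, with the endpoint and path taken from the log above:

	    # Keep admin.conf only if it points at the expected endpoint; otherwise drop it
	    if ! sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf; then
	        sudo rm -f /etc/kubernetes/admin.conf
	    fi
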
	I1208 00:42:33.688609  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:42:33.724887  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:42:33.724941  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:42:33.797997  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:42:33.798062  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:42:33.798096  832221 kubeadm.go:319] OS: Linux
	I1208 00:42:33.798139  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:42:33.798186  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:42:33.798232  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:42:33.798279  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:42:33.798325  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:42:33.798372  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:42:33.798416  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:42:33.798462  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:42:33.798507  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:42:33.859952  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:42:33.860071  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:42:33.860170  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:42:33.868067  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:42:33.869917  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:42:33.869999  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:42:33.870063  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:42:33.870137  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:42:33.870197  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:42:33.870265  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:42:33.870368  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:42:33.870448  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:42:33.870928  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:42:33.871217  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:42:33.871538  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:42:33.871740  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:42:33.871797  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:42:34.028121  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:42:34.367427  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:42:34.702083  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:42:35.025762  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:42:35.511131  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:42:35.511826  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:42:35.514836  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:42:35.516409  832221 out.go:252]   - Booting up control plane ...
	I1208 00:42:35.516507  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:42:35.516848  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:42:35.519384  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:42:35.533955  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:42:35.534084  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:42:35.541753  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:42:35.542016  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:42:35.542213  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:42:35.674531  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:42:35.674638  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:46:35.675373  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115059s
	I1208 00:46:35.675397  832221 kubeadm.go:319] 
	I1208 00:46:35.675450  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:46:35.675480  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:46:35.675578  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:46:35.675582  832221 kubeadm.go:319] 
	I1208 00:46:35.675680  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:46:35.675709  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:46:35.675738  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:46:35.675741  832221 kubeadm.go:319] 
	I1208 00:46:35.680376  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:46:35.680807  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:46:35.680915  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:46:35.681162  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:46:35.681167  832221 kubeadm.go:319] 
	I1208 00:46:35.681238  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:46:35.681347  832221 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115059s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
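
	Both kubeadm attempts fail the same way: the kubelet never becomes healthy on http://127.0.0.1:10248/healthz within 4 minutes, so the wait-control-plane phase times out. kubeadm's own suggestions, plus the health probe it polls, can be run directly on the node (a sketch; these are the exact commands named in the output above):

	    # Service state and recent kubelet logs, as kubeadm suggests
	    systemctl status kubelet
	    journalctl -xeu kubelet

	    # The health probe kubeadm polls while waiting for the control plane
	    curl -sSL http://127.0.0.1:10248/healthz

	    # The preflight warning also notes the unit is not enabled
	    sudo systemctl enable kubelet.service

	The kubelet journal is the most likely place to show why the process exits, for example whether it trips over the cgroup v1 condition flagged in the preflight warnings.
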
	
	I1208 00:46:35.681436  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:46:36.099633  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:46:36.112518  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:46:36.112573  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:46:36.120714  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:46:36.120723  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:46:36.120772  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:46:36.128165  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:46:36.128218  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:46:36.135603  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:46:36.142958  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:46:36.143011  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:46:36.150557  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.158107  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:46:36.158166  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.165315  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:46:36.172678  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:46:36.172733  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:46:36.179983  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:46:36.221281  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:46:36.221576  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:46:36.304904  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:46:36.304971  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:46:36.305006  832221 kubeadm.go:319] OS: Linux
	I1208 00:46:36.305062  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:46:36.305109  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:46:36.305154  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:46:36.305201  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:46:36.305247  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:46:36.305299  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:46:36.305343  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:46:36.305391  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:46:36.305437  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:46:36.375885  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:46:36.375986  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:46:36.376075  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:46:36.387291  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:46:36.389104  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:46:36.389182  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:46:36.389272  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:46:36.389371  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:46:36.389436  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:46:36.389506  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:46:36.389559  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:46:36.389626  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:46:36.389691  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:46:36.389770  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:46:36.389858  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:46:36.389893  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:46:36.389946  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:46:37.029886  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:46:37.175943  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:46:37.229666  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:46:37.386162  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:46:37.721262  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:46:37.722365  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:46:37.726361  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:46:37.727820  832221 out.go:252]   - Booting up control plane ...
	I1208 00:46:37.727919  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:46:37.727991  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:46:37.728873  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:46:37.743822  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:46:37.744021  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:46:37.751812  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:46:37.751899  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:46:37.751935  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:46:37.878966  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:46:37.879079  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:50:37.879778  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187421s
	I1208 00:50:37.879803  832221 kubeadm.go:319] 
	I1208 00:50:37.879860  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:50:37.879893  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:50:37.879997  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:50:37.880002  832221 kubeadm.go:319] 
	I1208 00:50:37.880106  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:50:37.880137  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:50:37.880167  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:50:37.880170  832221 kubeadm.go:319] 
	I1208 00:50:37.885162  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:50:37.885617  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:50:37.885748  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:50:37.886002  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:50:37.886010  832221 kubeadm.go:319] 
	I1208 00:50:37.886091  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:50:37.886152  832221 kubeadm.go:403] duration metric: took 12m7.43140026s to StartCluster
	I1208 00:50:37.886198  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:50:37.886263  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:50:37.913929  832221 cri.go:89] found id: ""
	I1208 00:50:37.913943  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.913950  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:50:37.913956  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:50:37.914018  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:50:37.940084  832221 cri.go:89] found id: ""
	I1208 00:50:37.940099  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.940106  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:50:37.940111  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:50:37.940168  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:50:37.965369  832221 cri.go:89] found id: ""
	I1208 00:50:37.965385  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.965392  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:50:37.965397  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:50:37.965454  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:50:37.991902  832221 cri.go:89] found id: ""
	I1208 00:50:37.991916  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.991923  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:50:37.991929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:50:37.991989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:50:38.041593  832221 cri.go:89] found id: ""
	I1208 00:50:38.041607  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.041614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:50:38.041619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:50:38.041681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:50:38.082440  832221 cri.go:89] found id: ""
	I1208 00:50:38.082454  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.082461  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:50:38.082467  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:50:38.082527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:50:38.108776  832221 cri.go:89] found id: ""
	I1208 00:50:38.108794  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.108804  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:50:38.108813  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:50:38.108827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:50:38.179358  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:50:38.179368  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:50:38.179379  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:50:38.249264  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:50:38.249284  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:50:38.283297  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:50:38.283313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:50:38.352336  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:50:38.352356  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 00:50:38.370094  832221 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:50:38.370135  832221 out.go:285] * 
	W1208 00:50:38.370244  832221 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.370347  832221 out.go:285] * 
	W1208 00:50:38.372671  832221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:50:38.375987  832221 out.go:203] 
	W1208 00:50:38.377331  832221 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.377432  832221 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:50:38.377486  832221 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:50:38.378650  832221 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976141949Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976389032Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976505948Z" level=info msg="Create NRI interface"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976728531Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976803559Z" level=info msg="runtime interface created"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976871433Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976925095Z" level=info msg="runtime interface starting up..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976975737Z" level=info msg="starting plugins..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.977043373Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.97717112Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:38:28 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.863535575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=86c63571-1518-417d-8c36-88972a10f046 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864340284Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd30f3d8-2e57-4e42-9d38-12f0c72774a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864886538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2294e0c2-3c35-4ad2-b70e-1cf27e140e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865379712Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8bd0e2b4-0a84-462b-a4c0-b4ef6c82ea6b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865907537Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6aa3aa31-43f2-49f4-affe-a3c22725ca07 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.86644149Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ab7db80c-c2d4-4d6c-acf1-db4a7ce32608 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.867005106Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fe935a58-ea6c-4485-86ff-51db887cec2b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:41.657788   21356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:41.658573   21356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:41.660425   21356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:41.661104   21356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:41.662782   21356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:50:41 up  5:32,  0 user,  load average: 0.46, 0.26, 0.45
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:50:38 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 08 00:50:39 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:39 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:39 functional-525396 kubelet[21217]: E1208 00:50:39.565119   21217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:39 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:40 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 08 00:50:40 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:40 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:40 functional-525396 kubelet[21239]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:40 functional-525396 kubelet[21239]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:40 functional-525396 kubelet[21239]: E1208 00:50:40.312067   21239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:40 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:40 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:50:41 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 08 00:50:41 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:41 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:50:41 functional-525396 kubelet[21273]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:41 functional-525396 kubelet[21273]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:50:41 functional-525396 kubelet[21273]: E1208 00:50:41.082772   21273 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:50:41 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:50:41 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (377.970567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.16s)
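Note on the failure above: the kubelet journal captured in the logs shows kubelet v1.35.0-beta.0 exiting with "kubelet is configured to not run on a host using cgroup v1", so the control plane never comes up and every API call fails with connection refused on port 8441. A minimal triage sketch, using only commands and flags that already appear in the captured output (the profile name and the --extra-config suggestion are quoted from the log; whether that flag actually clears the cgroup v1 validation on this 5.15.0-1084-aws host is not verified here):

	# re-run the failed start with the override minikube itself suggests
	minikube start -p functional-525396 --extra-config=kubelet.cgroup-driver=systemd
	# inspect the kubelet restart loop directly, as recommended by kubeadm
	systemctl status kubelet
	journalctl -xeu kubelet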

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-525396 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-525396 apply -f testdata/invalidsvc.yaml: exit status 1 (57.524505ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-525396 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
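This apply failure is a downstream symptom of the same outage: kubectl cannot reach the API server at 192.168.49.2:8441, so the manifest is never actually validated. A quick way to distinguish an unreachable API server from a genuinely invalid service definition (a sketch; the context name and address are taken from the output above, and /healthz is the standard apiserver health endpoint, assumed rather than shown in this log):

	# if these fail with connection refused, the problem is the cluster, not the manifest
	kubectl --context functional-525396 cluster-info
	curl -k https://192.168.49.2:8441/healthz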

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-525396 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-525396 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-525396 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-525396 --alsologtostderr -v=1] stderr:
I1208 00:52:45.825841  849127 out.go:360] Setting OutFile to fd 1 ...
I1208 00:52:45.825974  849127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:45.825985  849127 out.go:374] Setting ErrFile to fd 2...
I1208 00:52:45.825990  849127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:45.826233  849127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:52:45.826477  849127 mustload.go:66] Loading cluster: functional-525396
I1208 00:52:45.826926  849127 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:45.827387  849127 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:52:45.843945  849127 host.go:66] Checking if "functional-525396" exists ...
I1208 00:52:45.844270  849127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:52:45.899875  849127 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.890928904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:52:45.899994  849127 api_server.go:166] Checking apiserver status ...
I1208 00:52:45.900060  849127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1208 00:52:45.900109  849127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:52:45.917267  849127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
W1208 00:52:46.025411  849127 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1208 00:52:46.028698  849127 out.go:179] * The control-plane node functional-525396 apiserver is not running: (state=Stopped)
I1208 00:52:46.031738  849127 out.go:179]   To start a cluster, run: "minikube start -p functional-525396"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
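The inspect output above shows the cluster's API-server port, 8441/tcp, published on the host at 127.0.0.1:33511. As a purely illustrative probe (not part of the test harness), that mapping can be checked directly from the host; while the apiserver is stopped, as the post-mortem below confirms, the request fails instead of returning a health status:

	# Illustrative only: probe the published apiserver port from the host.
	# A healthy kube-apiserver typically answers /healthz with "ok";
	# with the apiserver stopped, the connection simply fails.
	curl -sk https://127.0.0.1:33511/healthz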
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (332.843786ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-525396 service hello-node --url --format={{.IP}}                                                                                         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ service   │ functional-525396 service hello-node --url                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1              │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh -- ls -la /mount-9p                                                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh cat /mount-9p/test-1765155156307702116                                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh sudo umount -f /mount-9p                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3736246558/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh -- ls -la /mount-9p                                                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh sudo umount -f /mount-9p                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount1 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount3 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount2 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount1                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh findmnt -T /mount2                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh findmnt -T /mount3                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ mount     │ -p functional-525396 --kill=true                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-525396 --alsologtostderr -v=1                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:52:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:52:45.574627  849050 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:52:45.574939  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.574973  849050 out.go:374] Setting ErrFile to fd 2...
	I1208 00:52:45.575000  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.575412  849050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:52:45.575930  849050 out.go:368] Setting JSON to false
	I1208 00:52:45.577075  849050 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20098,"bootTime":1765135068,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:52:45.577197  849050 start.go:143] virtualization:  
	I1208 00:52:45.581599  849050 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:52:45.584680  849050 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:52:45.584765  849050 notify.go:221] Checking for updates...
	I1208 00:52:45.590612  849050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:52:45.593456  849050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:52:45.596411  849050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:52:45.599251  849050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:52:45.602027  849050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:52:45.605459  849050 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:52:45.606098  849050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:52:45.639100  849050 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:52:45.639273  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.705725  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.696388169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.705834  849050 docker.go:319] overlay module found
	I1208 00:52:45.708920  849050 out.go:179] * Using the docker driver based on existing profile
	I1208 00:52:45.711815  849050 start.go:309] selected driver: docker
	I1208 00:52:45.711841  849050 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.711946  849050 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:52:45.712065  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.768465  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.759533195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.768916  849050 cni.go:84] Creating CNI manager for ""
	I1208 00:52:45.768986  849050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:52:45.769029  849050 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.771959  849050 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976141949Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976389032Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976505948Z" level=info msg="Create NRI interface"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976728531Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976803559Z" level=info msg="runtime interface created"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976871433Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976925095Z" level=info msg="runtime interface starting up..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976975737Z" level=info msg="starting plugins..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.977043373Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.97717112Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:38:28 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.863535575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=86c63571-1518-417d-8c36-88972a10f046 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864340284Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd30f3d8-2e57-4e42-9d38-12f0c72774a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864886538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2294e0c2-3c35-4ad2-b70e-1cf27e140e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865379712Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8bd0e2b4-0a84-462b-a4c0-b4ef6c82ea6b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865907537Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6aa3aa31-43f2-49f4-affe-a3c22725ca07 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.86644149Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ab7db80c-c2d4-4d6c-acf1-db4a7ce32608 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.867005106Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fe935a58-ea6c-4485-86ff-51db887cec2b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:52:47.115561   23370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:47.116285   23370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:47.117709   23370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:47.118291   23370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:47.119835   23370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:52:47 up  5:34,  0 user,  load average: 0.42, 0.27, 0.43
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:52:44 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:45 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 08 00:52:45 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:45 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:45 functional-525396 kubelet[23253]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:45 functional-525396 kubelet[23253]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:45 functional-525396 kubelet[23253]: E1208 00:52:45.577500   23253 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:45 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:45 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:46 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 08 00:52:46 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:46 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:46 functional-525396 kubelet[23267]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:46 functional-525396 kubelet[23267]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:46 functional-525396 kubelet[23267]: E1208 00:52:46.322045   23267 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:46 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:46 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:47 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 08 00:52:47 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:47 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:47 functional-525396 kubelet[23355]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:47 functional-525396 kubelet[23355]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:47 functional-525396 kubelet[23355]: E1208 00:52:47.070736   23355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:47 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:47 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (315.6308ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)
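The kubelet section of the post-mortem above points at the underlying failure: every restart (counter 1130 through 1132) exits with "kubelet is configured to not run on a host using cgroup v1". As a minimal, purely illustrative check of the host's cgroup mode before re-running the suite (not part of the test harness):

	# Illustrative only: report the host cgroup filesystem type.
	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host,
	# matching the validation error in the kubelet logs above.
	stat -fc %T /sys/fs/cgroup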

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 status: exit status 2 (333.689119ms)

-- stdout --
	functional-525396
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-525396 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (345.44286ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-525396 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 status -o json: exit status 2 (307.427067ms)

-- stdout --
	{"Name":"functional-525396","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-525396 status -o json" : exit status 2
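All three status invocations above exit with status 2 because the kubelet and apiserver are reported Stopped; the helpers note such an exit "may be ok". For scripting around this report, the JSON form is the easiest to gate on; a hedged sketch, assuming jq is available on the host (not part of the test harness):

	# Illustrative only: succeed only if the apiserver reports Running
	# in the JSON status shown above (jq -e sets the exit code from the result).
	out/minikube-linux-arm64 -p functional-525396 status -o json \
	  | jq -e '.APIServer == "Running"'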
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (353.569217ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-525396 addons list -o json                                                                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ service │ functional-525396 service list                                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ service │ functional-525396 service list -o json                                                                                                              │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ service │ functional-525396 service --namespace=default --https --url hello-node                                                                              │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ service │ functional-525396 service hello-node --url --format={{.IP}}                                                                                         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ service │ functional-525396 service hello-node --url                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount   │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1              │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh -- ls -la /mount-9p                                                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh cat /mount-9p/test-1765155156307702116                                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh sudo umount -f /mount-9p                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ mount   │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3736246558/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh -- ls -la /mount-9p                                                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh sudo umount -f /mount-9p                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount   │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount1 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount   │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount3 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount   │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount2 --alsologtostderr -v=1                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh     │ functional-525396 ssh findmnt -T /mount1                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh findmnt -T /mount2                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh     │ functional-525396 ssh findmnt -T /mount3                                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ mount   │ -p functional-525396 --kill=true                                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
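The mount and ssh rows above are the 9p mount checks this test drives. A minimal sketch of the same sequence run by hand, assuming the functional-525396 profile from the table and a hypothetical host directory /tmp/example:

	# expose a host directory inside the node over 9p (host path is hypothetical)
	minikube mount -p functional-525396 /tmp/example:/mount-9p --port 46464 &
	# confirm the mount is visible from inside the node, as the ssh rows do
	minikube -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-525396 ssh -- ls -la /mount-9p
	# tear the mount down, mirroring the final --kill=true row
	minikube mount -p functional-525396 --kill=true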
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:38:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:38:25.865142  832221 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:38:25.865266  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865270  832221 out.go:374] Setting ErrFile to fd 2...
	I1208 00:38:25.865273  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865522  832221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:38:25.865905  832221 out.go:368] Setting JSON to false
	I1208 00:38:25.866798  832221 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":19238,"bootTime":1765135068,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:38:25.866898  832221 start.go:143] virtualization:  
	I1208 00:38:25.870446  832221 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:38:25.873443  832221 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:38:25.873527  832221 notify.go:221] Checking for updates...
	I1208 00:38:25.877177  832221 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:38:25.880254  832221 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:38:25.883080  832221 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:38:25.885867  832221 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:38:25.888710  832221 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:38:25.892134  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:25.892227  832221 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:38:25.926814  832221 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:38:25.926949  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:25.982933  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:25.973301038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:25.983053  832221 docker.go:319] overlay module found
	I1208 00:38:25.986144  832221 out.go:179] * Using the docker driver based on existing profile
	I1208 00:38:25.988897  832221 start.go:309] selected driver: docker
	I1208 00:38:25.988906  832221 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:25.989004  832221 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:38:25.989104  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:26.085905  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:26.075169003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:26.086340  832221 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:38:26.086364  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:26.086419  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:26.086463  832221 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:26.089599  832221 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:38:26.092632  832221 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:38:26.095593  832221 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:38:26.098465  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:26.098511  832221 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:38:26.098512  832221 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:38:26.098520  832221 cache.go:65] Caching tarball of preloaded images
	I1208 00:38:26.098640  832221 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:38:26.098648  832221 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:38:26.098767  832221 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:38:26.118762  832221 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:38:26.118779  832221 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:38:26.118798  832221 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:38:26.118832  832221 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:38:26.118982  832221 start.go:364] duration metric: took 72.616µs to acquireMachinesLock for "functional-525396"
	I1208 00:38:26.119001  832221 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:38:26.119005  832221 fix.go:54] fixHost starting: 
	I1208 00:38:26.119276  832221 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:38:26.135702  832221 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:38:26.135737  832221 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:38:26.138942  832221 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:38:26.138968  832221 machine.go:94] provisionDockerMachine start ...
	I1208 00:38:26.139048  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.156040  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.156360  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.156366  832221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:38:26.306195  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.306209  832221 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:38:26.306278  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.323547  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.323853  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.323861  832221 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:38:26.483358  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.483423  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.500892  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.501201  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.501214  832221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:38:26.651219  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:38:26.651236  832221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:38:26.651262  832221 ubuntu.go:190] setting up certificates
	I1208 00:38:26.651269  832221 provision.go:84] configureAuth start
	I1208 00:38:26.651330  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:26.668935  832221 provision.go:143] copyHostCerts
	I1208 00:38:26.669007  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:38:26.669020  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:38:26.669092  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:38:26.669226  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:38:26.669232  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:38:26.669258  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:38:26.669316  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:38:26.669319  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:38:26.669351  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:38:26.669396  832221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
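The server certificate generated above carries the SANs listed in the san=[...] field. As a hedged check (the store path is taken from the log lines above), the SANs on the resulting server.pem can be inspected with openssl:

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'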
	I1208 00:38:26.882878  832221 provision.go:177] copyRemoteCerts
	I1208 00:38:26.882932  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:38:26.882976  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.900195  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.008298  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:38:27.026654  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:38:27.044245  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:38:27.061828  832221 provision.go:87] duration metric: took 410.535167ms to configureAuth
	I1208 00:38:27.061847  832221 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:38:27.062049  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:27.062144  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.079069  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:27.079387  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:27.079399  832221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:38:27.403353  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:38:27.403368  832221 machine.go:97] duration metric: took 1.264393629s to provisionDockerMachine
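The SSH command a few lines above writes /etc/sysconfig/crio.minikube with the --insecure-registry option and restarts CRI-O. A minimal sketch of verifying that step on the node, assuming the same profile:

	# show the options file minikube just wrote, and confirm crio came back up
	minikube -p functional-525396 ssh -- sudo cat /etc/sysconfig/crio.minikube
	minikube -p functional-525396 ssh -- systemctl is-active crio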
	I1208 00:38:27.403378  832221 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:38:27.403389  832221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:38:27.403457  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:38:27.403520  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.422294  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.531362  832221 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:38:27.534870  832221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:38:27.534888  832221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:38:27.534898  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:38:27.534950  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:38:27.535028  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:38:27.535101  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:38:27.535142  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:38:27.543303  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:27.561264  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:38:27.579215  832221 start.go:296] duration metric: took 175.824145ms for postStartSetup
	I1208 00:38:27.579284  832221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:38:27.579329  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.597098  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.699502  832221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:38:27.703953  832221 fix.go:56] duration metric: took 1.584940995s for fixHost
	I1208 00:38:27.703967  832221 start.go:83] releasing machines lock for "functional-525396", held for 1.584978296s
	I1208 00:38:27.704034  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:27.720794  832221 ssh_runner.go:195] Run: cat /version.json
	I1208 00:38:27.720838  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.721083  832221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:38:27.721126  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.740766  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.744839  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.842382  832221 ssh_runner.go:195] Run: systemctl --version
	I1208 00:38:27.933498  832221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:38:27.969664  832221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:38:27.973926  832221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:38:27.973991  832221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:38:27.981670  832221 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:38:27.981684  832221 start.go:496] detecting cgroup driver to use...
	I1208 00:38:27.981714  832221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:38:27.981757  832221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:38:27.996930  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:38:28.011523  832221 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:38:28.011601  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:38:28.029696  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:38:28.043991  832221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:38:28.162184  832221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:38:28.302345  832221 docker.go:234] disabling docker service ...
	I1208 00:38:28.302409  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:38:28.316944  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:38:28.329323  832221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:38:28.471674  832221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:38:28.594617  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:38:28.607360  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:38:28.621958  832221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:38:28.622014  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.631486  832221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:38:28.631544  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.641093  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.650549  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.660155  832221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:38:28.667958  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.676952  832221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.685235  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.693630  832221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:38:28.701133  832221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:38:28.708624  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:28.814162  832221 ssh_runner.go:195] Run: sudo systemctl restart crio
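The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A hedged way to confirm the keys they touch, with the expected values taken directly from those commands:

	# expected after the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
	# and default_sysctls containing "net.ipv4.ip_unprivileged_port_start=0"
	minikube -p functional-525396 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"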
	I1208 00:38:28.986282  832221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:38:28.986346  832221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:38:28.991517  832221 start.go:564] Will wait 60s for crictl version
	I1208 00:38:28.991573  832221 ssh_runner.go:195] Run: which crictl
	I1208 00:38:28.995534  832221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:38:29.025912  832221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:38:29.025997  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.062279  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.096298  832221 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:38:29.099065  832221 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:38:29.116028  832221 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:38:29.122672  832221 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:38:29.125488  832221 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:38:29.125636  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:29.125706  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.164815  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.164827  832221 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:38:29.164879  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.195499  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.195511  832221 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:38:29.195518  832221 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:38:29.195647  832221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
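The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders from this node config; the scp lines a little further down write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged check of what actually landed on the node:

	# print the unit plus its drop-in as systemd sees them
	minikube -p functional-525396 ssh -- systemctl cat kubelet
	minikube -p functional-525396 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf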
	I1208 00:38:29.195726  832221 ssh_runner.go:195] Run: crio config
	I1208 00:38:29.250138  832221 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:38:29.250159  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:29.250168  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:29.250181  832221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:38:29.250206  832221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:38:29.250329  832221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
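The YAML above is the kubeadm config minikube renders; the scp line below writes it to /var/tmp/minikube/kubeadm.yaml.new (2071 bytes). As a hedged sanity check, assuming the bundled kubeadm binary supports the validate subcommand, the rendered file can be checked before it is applied:

	# validate the rendered kubeadm config with the bundled binary (paths from the log)
	minikube -p functional-525396 ssh "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"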
	
	I1208 00:38:29.250397  832221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:38:29.258150  832221 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:38:29.258234  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:38:29.265694  832221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:38:29.278151  832221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:38:29.290865  832221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1208 00:38:29.303277  832221 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:38:29.306745  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:29.413867  832221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:38:29.757020  832221 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:38:29.757040  832221 certs.go:195] generating shared ca certs ...
	I1208 00:38:29.757055  832221 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:38:29.757227  832221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:38:29.757282  832221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:38:29.757288  832221 certs.go:257] generating profile certs ...
	I1208 00:38:29.757406  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:38:29.757463  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:38:29.757516  832221 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:38:29.757642  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:38:29.757680  832221 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:38:29.757687  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:38:29.757715  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:38:29.757753  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:38:29.757774  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:38:29.757826  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:29.761393  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:38:29.783882  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:38:29.803461  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:38:29.822714  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:38:29.839981  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:38:29.857351  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:38:29.874240  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:38:29.890650  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:38:29.906746  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:38:29.924059  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:38:29.940748  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:38:29.958110  832221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:38:29.970093  832221 ssh_runner.go:195] Run: openssl version
	I1208 00:38:29.976075  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.983124  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:38:29.990594  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994143  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994197  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:38:30.038336  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:38:30.048261  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.057929  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:38:30.067406  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072044  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072104  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.114205  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:38:30.122367  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.130206  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:38:30.138222  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142205  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142264  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.188681  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:38:30.197066  832221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:38:30.201256  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:38:30.247635  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:38:30.290467  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:38:30.332415  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:38:30.373141  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:38:30.413979  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
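Each openssl run above uses -checkend 86400, which exits 0 only when the certificate is still valid 86400 seconds (24 hours) from now. A minimal sketch of the same check against one of the listed certs:

	# exit status 0 => the cert does not expire within the next 24h
	minikube -p functional-525396 ssh "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'still valid for 24h' || echo 'expires within 24h'"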
	I1208 00:38:30.454763  832221 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:30.454864  832221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:38:30.454938  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.481225  832221 cri.go:89] found id: ""
	I1208 00:38:30.481285  832221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:38:30.488799  832221 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:38:30.488808  832221 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:38:30.488859  832221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:38:30.495821  832221 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.496331  832221 kubeconfig.go:125] found "functional-525396" server: "https://192.168.49.2:8441"
	I1208 00:38:30.497560  832221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:38:30.505232  832221 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:23:53.462513047 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:38:29.298599774 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
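
The drift decision above is driven purely by the exit status of "diff -u" between the kubeadm.yaml already on disk and the freshly rendered kubeadm.yaml.new: exit 0 means no change, exit 1 means drift and the cluster is reconfigured. A minimal sketch of that check, assuming a hypothetical helper name and running diff locally rather than over SSH:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // kubeadmConfigDrifted reports whether the rendered kubeadm config differs
    // from the existing one, the same check the log performs with "diff -u".
    // diff exits 0 when identical, 1 when the files differ, >1 on error.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, string(out), nil // out holds the unified diff shown above
    	}
    	return false, "", err // diff itself failed (e.g. missing file)
    }

    func main() {
    	drifted, diff, err := kubeadmConfigDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff error:", err)
    		return
    	}
    	if drifted {
    		fmt.Println("config drift detected, will reconfigure:\n" + diff)
    	}
    }
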
	I1208 00:38:30.505258  832221 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:38:30.505269  832221 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 00:38:30.505341  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.544576  832221 cri.go:89] found id: ""
	I1208 00:38:30.544636  832221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:38:30.564190  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:38:30.571945  832221 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  8 00:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  8 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  8 00:28 /etc/kubernetes/scheduler.conf
	
	I1208 00:38:30.572003  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:38:30.579767  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:38:30.588961  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.589038  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:38:30.596275  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.604001  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.604058  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.611049  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:38:30.618317  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.618369  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
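
The grep/rm sequence above checks each of kubelet.conf, controller-manager.conf and scheduler.conf for the expected control-plane endpoint (https://control-plane.minikube.internal:8441) and deletes any file that does not reference it, so that the kubeadm kubeconfig phase regenerates it. A sketch of the same prune step, assuming a hypothetical helper name and plain file access instead of sudo over SSH:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfigs removes any kubeconfig that does not mention the
    // expected control-plane endpoint, mirroring the grep + rm sequence above.
    func pruneStaleKubeconfigs(endpoint string, paths []string) error {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%s does not reference %s, removing\n", p, endpoint)
    			if err := os.Remove(p); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    	if err != nil {
    		fmt.Println("prune failed:", err)
    	}
    }
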
	I1208 00:38:30.625673  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:38:30.633203  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:30.679020  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.303260  832221 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.624214812s)
	I1208 00:38:32.303321  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.499121  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.557405  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
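
The five ssh_runner lines above replay individual kubeadm init phases against the copied config: certs, kubeconfig, kubelet-start, control-plane and local etcd, in that order. A hedged sketch of driving that phase sequence from Go (the helper name is an assumption; the binary and config paths are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // rerunControlPlanePhases replays the kubeadm init phases seen above
    // against an existing kubeadm config file.
    func rerunControlPlanePhases(kubeadmBin, configPath string) error {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", configPath)
    		if out, err := exec.Command(kubeadmBin, args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm %v: %w\n%s", phase, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := rerunControlPlanePhases(
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
    		"/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Println(err)
    	}
    }
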
	I1208 00:38:32.605845  832221 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:38:32.605924  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.106778  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.606873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.106818  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.606134  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.106245  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.607017  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.106011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.606401  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.106569  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.606153  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.106367  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.605995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.106910  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.606698  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.606687  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.106589  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.606067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.106823  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.606794  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.106122  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.606931  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.106765  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.606092  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.107046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.606088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.106757  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.606004  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.106996  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.606590  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.106432  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.106745  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.606390  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.106196  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.606618  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.106064  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.606867  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.106995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.606766  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.106131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.606779  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.106290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.606219  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.106089  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.607007  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.106717  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.106475  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.607046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.106582  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.606125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.107067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.606667  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.106461  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.606353  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.106471  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.606654  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.107110  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.607006  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.106780  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.606382  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.106088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.606332  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.106060  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.106803  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.606107  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.106414  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.606178  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.106868  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.606030  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.106375  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.606102  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.107011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.606304  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.106096  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.606827  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.606893  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.107045  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.606816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.106126  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.606899  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.106572  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.606111  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.606103  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.106801  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.606703  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.106595  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.606139  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.106918  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.606350  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.106147  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.606821  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.106994  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.606129  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.106114  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.606499  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.106132  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.606921  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.106736  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.606121  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.106425  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.606155  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.106763  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.106058  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.606943  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.106991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.606966  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.106181  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.606342  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.106653  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.606117  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.106026  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
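
The long run of pgrep calls above is the apiserver wait loop: after the control-plane phases, minikube polls roughly every 500ms for a kube-apiserver process matching "kube-apiserver.*minikube.*", and in this run the process never appears, so the loop gives way to the diagnostic cycle that follows. A minimal sketch of such a poll-with-timeout loop (the function name and the one-minute budget are assumptions for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls for a kube-apiserver process the way the
    // log above does, giving up after the supplied timeout.
    func waitForAPIServerProcess(timeout time.Duration) (int, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			var pid int
    			if _, scanErr := fmt.Sscanf(string(out), "%d", &pid); scanErr == nil {
    				return pid, nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return 0, fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	pid, err := waitForAPIServerProcess(time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", pid)
    }
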
	I1208 00:39:32.606138  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:32.606213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:32.631935  832221 cri.go:89] found id: ""
	I1208 00:39:32.631949  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.631956  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:32.631962  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:32.632027  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:32.657240  832221 cri.go:89] found id: ""
	I1208 00:39:32.657260  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.657267  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:32.657273  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:32.657332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:32.686247  832221 cri.go:89] found id: ""
	I1208 00:39:32.686261  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.686269  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:32.686274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:32.686334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:32.712330  832221 cri.go:89] found id: ""
	I1208 00:39:32.712345  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.712352  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:32.712358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:32.712416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:32.738663  832221 cri.go:89] found id: ""
	I1208 00:39:32.738678  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.738685  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:32.738690  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:32.738755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:32.765710  832221 cri.go:89] found id: ""
	I1208 00:39:32.765725  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.765731  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:32.765737  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:32.765792  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:32.791480  832221 cri.go:89] found id: ""
	I1208 00:39:32.791494  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.791501  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:32.791509  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:32.791520  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:32.856630  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:32.856654  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:32.873574  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:32.873591  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:32.937953  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:32.937966  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:32.937977  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:33.008749  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:33.008776  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
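
Each diagnostic pass above gathers the same five sources: kubelet and CRI-O journals, filtered dmesg, "kubectl describe nodes" (which fails here because nothing is listening on localhost:8441), and the container status listing. A hedged sketch that runs that set of commands verbatim through bash, purely for illustration of the cycle:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherDiagnostics runs the commands the log cycles through while the
    // apiserver is down. The command strings are copied from the log; wrapping
    // them in "bash -c" preserves the pipes and backticks.
    func gatherDiagnostics() {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
    	}
    }

    func main() {
    	gatherDiagnostics()
    }
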
	I1208 00:39:35.542093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:35.553517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:35.553575  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:35.584212  832221 cri.go:89] found id: ""
	I1208 00:39:35.584226  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.584233  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:35.584238  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:35.584296  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:35.615871  832221 cri.go:89] found id: ""
	I1208 00:39:35.615885  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.615892  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:35.615897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:35.615954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:35.641597  832221 cri.go:89] found id: ""
	I1208 00:39:35.641611  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.641618  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:35.641623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:35.641683  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:35.667538  832221 cri.go:89] found id: ""
	I1208 00:39:35.667551  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.667567  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:35.667572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:35.667633  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:35.696105  832221 cri.go:89] found id: ""
	I1208 00:39:35.696118  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.696124  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:35.696130  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:35.696187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:35.725150  832221 cri.go:89] found id: ""
	I1208 00:39:35.725165  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.725172  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:35.725178  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:35.725236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:35.752762  832221 cri.go:89] found id: ""
	I1208 00:39:35.752776  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.752783  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:35.752791  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:35.752801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.780454  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:35.780471  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:35.846096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:35.846118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:35.863081  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:35.863098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:35.932235  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:35.932246  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:35.932259  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.502146  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:38.514634  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:38.514691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:38.548208  832221 cri.go:89] found id: ""
	I1208 00:39:38.548223  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.548230  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:38.548235  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:38.548305  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:38.579066  832221 cri.go:89] found id: ""
	I1208 00:39:38.579080  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.579087  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:38.579092  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:38.579154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:38.605928  832221 cri.go:89] found id: ""
	I1208 00:39:38.605942  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.605949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:38.605954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:38.606013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:38.631317  832221 cri.go:89] found id: ""
	I1208 00:39:38.631332  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.631339  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:38.631350  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:38.631410  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:38.657581  832221 cri.go:89] found id: ""
	I1208 00:39:38.657595  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.657602  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:38.657607  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:38.657664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:38.688104  832221 cri.go:89] found id: ""
	I1208 00:39:38.688118  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.688125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:38.688131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:38.688191  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:38.712900  832221 cri.go:89] found id: ""
	I1208 00:39:38.712914  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.712921  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:38.712929  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:38.712939  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.782215  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:38.782236  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:38.813188  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:38.813203  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:38.882554  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:38.882574  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:38.899573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:38.899590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:38.963587  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.464816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:41.476933  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:41.476994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:41.519038  832221 cri.go:89] found id: ""
	I1208 00:39:41.519052  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.519059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:41.519065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:41.519120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:41.549931  832221 cri.go:89] found id: ""
	I1208 00:39:41.549946  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.549953  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:41.549958  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:41.550016  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:41.579952  832221 cri.go:89] found id: ""
	I1208 00:39:41.579966  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.579973  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:41.579978  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:41.580038  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:41.609851  832221 cri.go:89] found id: ""
	I1208 00:39:41.609865  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.609873  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:41.609878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:41.609940  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:41.635896  832221 cri.go:89] found id: ""
	I1208 00:39:41.635910  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.635917  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:41.635923  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:41.635986  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:41.662056  832221 cri.go:89] found id: ""
	I1208 00:39:41.662083  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.662091  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:41.662097  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:41.662170  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:41.687327  832221 cri.go:89] found id: ""
	I1208 00:39:41.687342  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.687349  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:41.687357  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:41.687367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:41.753129  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:41.753148  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:41.769911  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:41.769927  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:41.838088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.838099  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:41.838111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:41.910629  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:41.910651  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:44.440476  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:44.450677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:44.450737  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:44.477661  832221 cri.go:89] found id: ""
	I1208 00:39:44.477674  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.477681  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:44.477687  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:44.477754  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:44.502810  832221 cri.go:89] found id: ""
	I1208 00:39:44.502824  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.502831  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:44.502836  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:44.502922  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:44.536158  832221 cri.go:89] found id: ""
	I1208 00:39:44.536171  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.536178  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:44.536187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:44.536245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:44.569819  832221 cri.go:89] found id: ""
	I1208 00:39:44.569832  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.569839  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:44.569844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:44.569900  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:44.596822  832221 cri.go:89] found id: ""
	I1208 00:39:44.596837  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.596844  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:44.596849  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:44.596909  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:44.626118  832221 cri.go:89] found id: ""
	I1208 00:39:44.626132  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.626139  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:44.626159  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:44.626220  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:44.651327  832221 cri.go:89] found id: ""
	I1208 00:39:44.651341  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.651348  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:44.651356  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:44.651366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:44.717153  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:44.717174  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:44.734169  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:44.734200  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:44.800240  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:44.800252  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:44.800263  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:44.873699  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:44.873729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.404232  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:47.415493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:47.415558  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:47.442934  832221 cri.go:89] found id: ""
	I1208 00:39:47.442948  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.442955  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:47.442961  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:47.443025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:47.468072  832221 cri.go:89] found id: ""
	I1208 00:39:47.468086  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.468093  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:47.468099  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:47.468169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:47.499439  832221 cri.go:89] found id: ""
	I1208 00:39:47.499452  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.499460  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:47.499465  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:47.499522  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:47.525160  832221 cri.go:89] found id: ""
	I1208 00:39:47.525173  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.525180  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:47.525186  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:47.525261  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:47.557881  832221 cri.go:89] found id: ""
	I1208 00:39:47.557902  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.557909  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:47.557915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:47.557973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:47.585993  832221 cri.go:89] found id: ""
	I1208 00:39:47.586006  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.586013  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:47.586018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:47.586074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:47.611544  832221 cri.go:89] found id: ""
	I1208 00:39:47.611559  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.611565  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:47.611573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:47.611594  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:47.673948  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:47.673960  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:47.673971  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:47.746050  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:47.746071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.778206  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:47.778228  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:47.843769  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:47.843788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.361131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:50.373118  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:50.373178  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:50.402177  832221 cri.go:89] found id: ""
	I1208 00:39:50.402192  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.402199  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:50.402204  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:50.402262  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:50.428277  832221 cri.go:89] found id: ""
	I1208 00:39:50.428291  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.428298  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:50.428303  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:50.428361  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:50.453780  832221 cri.go:89] found id: ""
	I1208 00:39:50.453793  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.453801  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:50.453806  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:50.453867  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:50.478816  832221 cri.go:89] found id: ""
	I1208 00:39:50.478830  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.478838  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:50.478887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:50.478952  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:50.506494  832221 cri.go:89] found id: ""
	I1208 00:39:50.506508  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.506516  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:50.506523  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:50.506581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:50.548254  832221 cri.go:89] found id: ""
	I1208 00:39:50.548267  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.548275  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:50.548289  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:50.548345  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:50.580999  832221 cri.go:89] found id: ""
	I1208 00:39:50.581013  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.581020  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:50.581028  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:50.581038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:50.646872  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:50.646894  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.663705  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:50.663722  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:50.731208  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:50.731220  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:50.731231  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:50.800530  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:50.800552  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:53.328838  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:53.338798  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:53.338876  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:53.364078  832221 cri.go:89] found id: ""
	I1208 00:39:53.364093  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.364100  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:53.364106  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:53.364165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:53.389870  832221 cri.go:89] found id: ""
	I1208 00:39:53.389884  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.389891  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:53.389897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:53.389955  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:53.415578  832221 cri.go:89] found id: ""
	I1208 00:39:53.415592  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.415600  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:53.415606  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:53.415664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:53.440749  832221 cri.go:89] found id: ""
	I1208 00:39:53.440763  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.440769  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:53.440775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:53.440837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:53.469528  832221 cri.go:89] found id: ""
	I1208 00:39:53.469542  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.469550  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:53.469555  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:53.469614  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:53.494205  832221 cri.go:89] found id: ""
	I1208 00:39:53.494219  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.494225  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:53.494231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:53.494286  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:53.536734  832221 cri.go:89] found id: ""
	I1208 00:39:53.536748  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.536755  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:53.536763  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:53.536773  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:53.608590  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:53.608610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:53.625117  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:53.625134  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:53.687237  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:53.687248  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:53.687258  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:53.755459  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:53.755480  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.290756  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:56.302211  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:56.302272  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:56.327085  832221 cri.go:89] found id: ""
	I1208 00:39:56.327098  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.327105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:56.327110  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:56.327165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:56.351553  832221 cri.go:89] found id: ""
	I1208 00:39:56.351567  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.351574  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:56.351579  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:56.351636  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:56.375432  832221 cri.go:89] found id: ""
	I1208 00:39:56.375445  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.375451  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:56.375456  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:56.375513  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:56.399254  832221 cri.go:89] found id: ""
	I1208 00:39:56.399267  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.399274  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:56.399282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:56.399337  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:56.424239  832221 cri.go:89] found id: ""
	I1208 00:39:56.424253  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.424260  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:56.424265  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:56.424322  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:56.447970  832221 cri.go:89] found id: ""
	I1208 00:39:56.447983  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.447990  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:56.447996  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:56.448059  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:56.480639  832221 cri.go:89] found id: ""
	I1208 00:39:56.480652  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.480659  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:56.480666  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:56.480680  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.514333  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:56.514349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:56.587248  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:56.587268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:56.604138  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:56.604156  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:56.667583  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:56.667593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:56.667605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.236478  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:59.246590  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:59.246653  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:59.274726  832221 cri.go:89] found id: ""
	I1208 00:39:59.274739  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.274746  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:59.274752  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:59.274816  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:59.302946  832221 cri.go:89] found id: ""
	I1208 00:39:59.302960  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.302967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:59.302972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:59.303036  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:59.328486  832221 cri.go:89] found id: ""
	I1208 00:39:59.328510  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.328517  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:59.328522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:59.328583  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:59.354620  832221 cri.go:89] found id: ""
	I1208 00:39:59.354638  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.354645  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:59.354651  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:59.354722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:59.379131  832221 cri.go:89] found id: ""
	I1208 00:39:59.379145  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.379152  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:59.379157  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:59.379221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:59.407900  832221 cri.go:89] found id: ""
	I1208 00:39:59.407915  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.407921  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:59.407930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:59.407999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:59.432790  832221 cri.go:89] found id: ""
	I1208 00:39:59.432804  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.432811  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:59.432819  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:59.432829  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:59.498500  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:59.498521  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:59.517843  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:59.517860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:59.592346  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:59.592356  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:59.592366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.660798  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:59.660821  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.193318  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:02.204389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:02.204452  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:02.233248  832221 cri.go:89] found id: ""
	I1208 00:40:02.233262  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.233272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:02.233277  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:02.233338  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:02.259542  832221 cri.go:89] found id: ""
	I1208 00:40:02.259555  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.259562  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:02.259567  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:02.259626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:02.284406  832221 cri.go:89] found id: ""
	I1208 00:40:02.284421  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.284428  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:02.284433  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:02.284492  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:02.314792  832221 cri.go:89] found id: ""
	I1208 00:40:02.314807  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.314815  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:02.314820  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:02.314902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:02.345720  832221 cri.go:89] found id: ""
	I1208 00:40:02.345735  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.345742  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:02.345748  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:02.345806  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:02.374260  832221 cri.go:89] found id: ""
	I1208 00:40:02.374275  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.374282  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:02.374288  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:02.374356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:02.401424  832221 cri.go:89] found id: ""
	I1208 00:40:02.401448  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.401456  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:02.401464  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:02.401477  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:02.418749  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:02.418772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:02.488580  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:02.488593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:02.488605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:02.561942  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:02.561963  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.594984  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:02.595001  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.164061  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:05.174102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:05.174162  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:05.200676  832221 cri.go:89] found id: ""
	I1208 00:40:05.200690  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.200697  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:05.200702  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:05.200762  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:05.229843  832221 cri.go:89] found id: ""
	I1208 00:40:05.229857  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.229864  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:05.229869  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:05.229923  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:05.254905  832221 cri.go:89] found id: ""
	I1208 00:40:05.254919  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.254926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:05.254930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:05.254989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:05.284106  832221 cri.go:89] found id: ""
	I1208 00:40:05.284120  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.284127  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:05.284132  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:05.284197  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:05.308626  832221 cri.go:89] found id: ""
	I1208 00:40:05.308640  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.308647  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:05.308652  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:05.308714  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:05.337161  832221 cri.go:89] found id: ""
	I1208 00:40:05.337175  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.337182  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:05.337187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:05.337268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:05.362077  832221 cri.go:89] found id: ""
	I1208 00:40:05.362091  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.362098  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:05.362105  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:05.362116  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.428096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:05.428115  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:05.445139  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:05.445161  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:05.507290  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:05.507310  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:05.507321  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:05.586340  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:05.586361  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.118998  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:08.129512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:08.129588  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:08.156251  832221 cri.go:89] found id: ""
	I1208 00:40:08.156265  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.156272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:08.156278  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:08.156344  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:08.183906  832221 cri.go:89] found id: ""
	I1208 00:40:08.183919  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.183926  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:08.183931  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:08.183987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:08.210358  832221 cri.go:89] found id: ""
	I1208 00:40:08.210372  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.210379  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:08.210384  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:08.210442  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:08.235462  832221 cri.go:89] found id: ""
	I1208 00:40:08.235476  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.235483  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:08.235489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:08.235544  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:08.261687  832221 cri.go:89] found id: ""
	I1208 00:40:08.261700  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.261707  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:08.261713  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:08.261771  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:08.285826  832221 cri.go:89] found id: ""
	I1208 00:40:08.285842  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.285849  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:08.285854  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:08.285912  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:08.312132  832221 cri.go:89] found id: ""
	I1208 00:40:08.312146  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.312153  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:08.312161  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:08.312171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:08.380160  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:08.380177  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:08.380187  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:08.455282  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:08.455305  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.490186  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:08.490207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:08.563751  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:08.563779  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.082398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:11.092581  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:11.092642  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:11.118553  832221 cri.go:89] found id: ""
	I1208 00:40:11.118568  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.118575  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:11.118580  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:11.118638  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:11.144055  832221 cri.go:89] found id: ""
	I1208 00:40:11.144070  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.144077  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:11.144082  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:11.144144  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:11.169906  832221 cri.go:89] found id: ""
	I1208 00:40:11.169919  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.169926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:11.169931  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:11.169988  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:11.197596  832221 cri.go:89] found id: ""
	I1208 00:40:11.197610  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.197617  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:11.197623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:11.197681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:11.223606  832221 cri.go:89] found id: ""
	I1208 00:40:11.223624  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.223631  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:11.223636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:11.223693  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:11.248818  832221 cri.go:89] found id: ""
	I1208 00:40:11.248832  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.248838  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:11.248844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:11.248902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:11.273540  832221 cri.go:89] found id: ""
	I1208 00:40:11.273554  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.273561  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:11.273568  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:11.273579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:11.338706  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:11.338726  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.357554  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:11.357571  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:11.420756  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:11.420767  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:11.420788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:11.489139  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:11.489157  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.024714  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:14.035808  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:14.035873  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:14.061793  832221 cri.go:89] found id: ""
	I1208 00:40:14.061807  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.061814  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:14.061819  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:14.061875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:14.090633  832221 cri.go:89] found id: ""
	I1208 00:40:14.090647  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.090654  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:14.090661  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:14.090719  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:14.115546  832221 cri.go:89] found id: ""
	I1208 00:40:14.115560  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.115567  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:14.115572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:14.115629  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:14.141065  832221 cri.go:89] found id: ""
	I1208 00:40:14.141079  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.141086  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:14.141091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:14.141154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:14.165799  832221 cri.go:89] found id: ""
	I1208 00:40:14.165814  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.165821  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:14.165826  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:14.165886  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:14.195480  832221 cri.go:89] found id: ""
	I1208 00:40:14.195494  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.195501  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:14.195506  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:14.195564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:14.220362  832221 cri.go:89] found id: ""
	I1208 00:40:14.220377  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.220384  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:14.220392  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:14.220405  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:14.287292  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:14.287303  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:14.287313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:14.356018  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:14.356038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.387237  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:14.387253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:14.454492  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:14.454512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:16.972125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:16.982309  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:16.982372  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:17.017693  832221 cri.go:89] found id: ""
	I1208 00:40:17.017706  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.017714  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:17.017719  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:17.017778  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:17.044376  832221 cri.go:89] found id: ""
	I1208 00:40:17.044391  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.044399  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:17.044404  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:17.044473  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:17.070587  832221 cri.go:89] found id: ""
	I1208 00:40:17.070601  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.070608  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:17.070613  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:17.070672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:17.095978  832221 cri.go:89] found id: ""
	I1208 00:40:17.095992  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.095999  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:17.096004  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:17.096062  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:17.122135  832221 cri.go:89] found id: ""
	I1208 00:40:17.122149  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.122156  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:17.122161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:17.122221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:17.148103  832221 cri.go:89] found id: ""
	I1208 00:40:17.148118  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.148125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:17.148131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:17.148192  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:17.172943  832221 cri.go:89] found id: ""
	I1208 00:40:17.172957  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.172964  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:17.172971  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:17.172982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:17.238368  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:17.238387  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:17.255667  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:17.255685  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:17.321644  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:17.321656  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:17.321667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:17.394476  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:17.394498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:19.927345  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:19.939629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:19.939691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:19.965406  832221 cri.go:89] found id: ""
	I1208 00:40:19.965420  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.965427  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:19.965432  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:19.965500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:19.992009  832221 cri.go:89] found id: ""
	I1208 00:40:19.992023  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.992030  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:19.992035  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:19.992098  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:20.029302  832221 cri.go:89] found id: ""
	I1208 00:40:20.029317  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.029324  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:20.029330  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:20.029399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:20.058056  832221 cri.go:89] found id: ""
	I1208 00:40:20.058071  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.058085  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:20.058091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:20.058165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:20.084189  832221 cri.go:89] found id: ""
	I1208 00:40:20.084203  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.084211  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:20.084216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:20.084291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:20.111361  832221 cri.go:89] found id: ""
	I1208 00:40:20.111376  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.111383  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:20.111389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:20.111449  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:20.141805  832221 cri.go:89] found id: ""
	I1208 00:40:20.141819  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.141826  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:20.141834  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:20.141844  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:20.169490  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:20.169506  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:20.234965  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:20.234985  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:20.252060  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:20.252078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:20.320257  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:20.320267  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:20.320280  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:22.888858  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:22.899382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:22.899447  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:22.924604  832221 cri.go:89] found id: ""
	I1208 00:40:22.924619  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.924625  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:22.924631  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:22.924698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:22.955239  832221 cri.go:89] found id: ""
	I1208 00:40:22.955253  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.955259  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:22.955264  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:22.955323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:22.981222  832221 cri.go:89] found id: ""
	I1208 00:40:22.981237  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.981244  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:22.981250  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:22.981317  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:23.011070  832221 cri.go:89] found id: ""
	I1208 00:40:23.011085  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.011092  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:23.011098  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:23.011169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:23.038240  832221 cri.go:89] found id: ""
	I1208 00:40:23.038255  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.038263  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:23.038268  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:23.038329  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:23.068452  832221 cri.go:89] found id: ""
	I1208 00:40:23.068466  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.068473  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:23.068479  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:23.068536  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:23.094006  832221 cri.go:89] found id: ""
	I1208 00:40:23.094020  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.094027  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:23.094035  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:23.094047  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:23.160498  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:23.160517  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:23.177630  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:23.177647  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:23.241245  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:23.241256  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:23.241268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:23.310140  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:23.310159  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:25.838645  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:25.849038  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:25.849104  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:25.876484  832221 cri.go:89] found id: ""
	I1208 00:40:25.876499  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.876506  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:25.876512  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:25.876574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:25.906565  832221 cri.go:89] found id: ""
	I1208 00:40:25.906579  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.906587  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:25.906592  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:25.906649  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:25.937448  832221 cri.go:89] found id: ""
	I1208 00:40:25.937463  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.937471  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:25.937476  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:25.937537  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:25.966528  832221 cri.go:89] found id: ""
	I1208 00:40:25.966542  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.966549  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:25.966554  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:25.966609  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:25.993465  832221 cri.go:89] found id: ""
	I1208 00:40:25.993480  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.993487  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:25.993493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:25.993554  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:26.022155  832221 cri.go:89] found id: ""
	I1208 00:40:26.022168  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.022175  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:26.022181  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:26.022239  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:26.049049  832221 cri.go:89] found id: ""
	I1208 00:40:26.049064  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.049072  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:26.049087  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:26.049098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:26.119386  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:26.119406  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:26.155712  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:26.155729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:26.223788  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:26.223809  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:26.245587  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:26.245610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:26.309129  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:28.809355  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:28.819547  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:28.819610  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:28.849672  832221 cri.go:89] found id: ""
	I1208 00:40:28.849687  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.849694  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:28.849700  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:28.849760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:28.880748  832221 cri.go:89] found id: ""
	I1208 00:40:28.880763  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.880769  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:28.880774  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:28.880837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:28.908198  832221 cri.go:89] found id: ""
	I1208 00:40:28.908212  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.908219  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:28.908224  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:28.908282  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:28.933130  832221 cri.go:89] found id: ""
	I1208 00:40:28.933144  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.933151  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:28.933156  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:28.933222  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:28.964126  832221 cri.go:89] found id: ""
	I1208 00:40:28.964140  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.964147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:28.964153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:28.964210  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:28.990484  832221 cri.go:89] found id: ""
	I1208 00:40:28.990499  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.990506  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:28.990512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:28.990573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:29.017806  832221 cri.go:89] found id: ""
	I1208 00:40:29.017820  832221 logs.go:282] 0 containers: []
	W1208 00:40:29.017828  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:29.017835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:29.017847  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:29.084613  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:29.084635  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:29.101973  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:29.101992  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:29.173921  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:29.173933  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:29.173944  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:29.240893  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:29.240915  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:31.777057  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:31.790721  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:31.790788  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:31.822768  832221 cri.go:89] found id: ""
	I1208 00:40:31.822783  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.822790  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:31.822795  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:31.822969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:31.848644  832221 cri.go:89] found id: ""
	I1208 00:40:31.848657  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.848672  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:31.848678  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:31.848745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:31.874088  832221 cri.go:89] found id: ""
	I1208 00:40:31.874101  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.874117  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:31.874123  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:31.874179  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:31.899211  832221 cri.go:89] found id: ""
	I1208 00:40:31.899234  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.899242  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:31.899247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:31.899316  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:31.924268  832221 cri.go:89] found id: ""
	I1208 00:40:31.924282  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.924290  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:31.924295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:31.924355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:31.950349  832221 cri.go:89] found id: ""
	I1208 00:40:31.950363  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.950370  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:31.950376  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:31.950433  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:31.979825  832221 cri.go:89] found id: ""
	I1208 00:40:31.979848  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.979856  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:31.979864  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:31.979875  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:32.045728  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:32.045748  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:32.062977  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:32.062995  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:32.127567  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:32.127579  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:32.127590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:32.195761  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:32.195782  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:34.725887  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:34.742661  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:34.742722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:34.778651  832221 cri.go:89] found id: ""
	I1208 00:40:34.778665  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.778672  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:34.778678  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:34.778736  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:34.811974  832221 cri.go:89] found id: ""
	I1208 00:40:34.811988  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.811995  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:34.812000  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:34.812057  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:34.844697  832221 cri.go:89] found id: ""
	I1208 00:40:34.844712  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.844719  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:34.844725  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:34.844782  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:34.872482  832221 cri.go:89] found id: ""
	I1208 00:40:34.872495  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.872502  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:34.872509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:34.872564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:34.898220  832221 cri.go:89] found id: ""
	I1208 00:40:34.898235  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.898242  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:34.898247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:34.898308  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:34.925442  832221 cri.go:89] found id: ""
	I1208 00:40:34.925457  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.925464  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:34.925470  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:34.925527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:34.952326  832221 cri.go:89] found id: ""
	I1208 00:40:34.952340  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.952347  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:34.952355  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:34.952367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:35.018286  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:35.018308  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:35.036568  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:35.036588  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:35.105378  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:35.105389  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:35.105403  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:35.175887  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:35.175909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:37.712873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:37.722837  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:37.722915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:37.748671  832221 cri.go:89] found id: ""
	I1208 00:40:37.748684  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.748691  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:37.748697  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:37.748760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:37.787454  832221 cri.go:89] found id: ""
	I1208 00:40:37.787467  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.787475  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:37.787479  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:37.787540  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:37.827928  832221 cri.go:89] found id: ""
	I1208 00:40:37.827942  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.827949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:37.827954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:37.828015  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:37.853248  832221 cri.go:89] found id: ""
	I1208 00:40:37.853261  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.853268  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:37.853274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:37.853333  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:37.881771  832221 cri.go:89] found id: ""
	I1208 00:40:37.881785  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.881792  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:37.881797  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:37.881862  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:37.908845  832221 cri.go:89] found id: ""
	I1208 00:40:37.908858  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.908864  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:37.908870  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:37.908927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:37.933663  832221 cri.go:89] found id: ""
	I1208 00:40:37.933676  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.933684  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:37.933691  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:37.933702  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:37.950237  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:37.950253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:38.015251  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:38.015261  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:38.015272  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:38.086877  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:38.086899  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:38.120835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:38.120851  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
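The block above is one full iteration of minikube's apiserver wait loop: it probes for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and then gathers dmesg, describe-nodes, CRI-O, container-status, and kubelet logs before retrying a few seconds later. A minimal sketch of that per-iteration probe, using only commands that actually appear in this log (pgrep, crictl, journalctl) and assuming shell access to the minikube node, could look like:

	# Hypothetical reproduction of one probe iteration; illustrative, not the exact minikube code.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching $name"
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo journalctl -u kubelet -n 400 | tail -n 20   # the kubelet journal explains why static pods never start
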
	I1208 00:40:40.690876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:40.701698  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:40.701757  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:40.728919  832221 cri.go:89] found id: ""
	I1208 00:40:40.728933  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.728944  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:40.728950  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:40.729006  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:40.756412  832221 cri.go:89] found id: ""
	I1208 00:40:40.756426  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.756433  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:40.756438  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:40.756496  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:40.785209  832221 cri.go:89] found id: ""
	I1208 00:40:40.785223  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.785230  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:40.785235  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:40.785293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:40.812803  832221 cri.go:89] found id: ""
	I1208 00:40:40.812816  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.812823  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:40.812828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:40.812884  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:40.841663  832221 cri.go:89] found id: ""
	I1208 00:40:40.841676  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.841683  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:40.841688  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:40.841745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:40.867267  832221 cri.go:89] found id: ""
	I1208 00:40:40.867281  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.867298  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:40.867304  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:40.867365  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:40.896639  832221 cri.go:89] found id: ""
	I1208 00:40:40.896652  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.896661  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:40.896668  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:40.896678  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:40.960376  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:40.960386  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:40.960397  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:41.032818  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:41.032839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.062752  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:41.062771  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:41.130656  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:41.130676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.649290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:43.659339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:43.659404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:43.685304  832221 cri.go:89] found id: ""
	I1208 00:40:43.685319  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.685326  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:43.685332  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:43.685394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:43.710805  832221 cri.go:89] found id: ""
	I1208 00:40:43.710820  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.710827  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:43.710856  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:43.710933  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:43.735910  832221 cri.go:89] found id: ""
	I1208 00:40:43.735923  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.735930  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:43.735936  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:43.735994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:43.776908  832221 cri.go:89] found id: ""
	I1208 00:40:43.776921  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.776928  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:43.776934  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:43.776997  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:43.809711  832221 cri.go:89] found id: ""
	I1208 00:40:43.809724  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.809731  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:43.809736  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:43.809794  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:43.838996  832221 cri.go:89] found id: ""
	I1208 00:40:43.839009  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.839016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:43.839022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:43.839087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:43.864075  832221 cri.go:89] found id: ""
	I1208 00:40:43.864088  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.864095  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:43.864103  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:43.864120  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:43.930430  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:43.930449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.948281  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:43.948301  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:44.016438  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:44.016448  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:44.016462  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:44.087788  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:44.087808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.619014  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:46.629647  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:46.629711  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:46.655337  832221 cri.go:89] found id: ""
	I1208 00:40:46.655352  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.655360  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:46.655365  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:46.655426  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:46.685122  832221 cri.go:89] found id: ""
	I1208 00:40:46.685137  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.685145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:46.685150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:46.685218  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:46.711647  832221 cri.go:89] found id: ""
	I1208 00:40:46.711661  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.711669  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:46.711674  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:46.711739  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:46.739056  832221 cri.go:89] found id: ""
	I1208 00:40:46.739070  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.739077  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:46.739082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:46.739138  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:46.777014  832221 cri.go:89] found id: ""
	I1208 00:40:46.777040  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.777047  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:46.777053  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:46.777120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:46.821392  832221 cri.go:89] found id: ""
	I1208 00:40:46.821407  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.821414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:46.821419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:46.821481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:46.847683  832221 cri.go:89] found id: ""
	I1208 00:40:46.847706  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.847714  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:46.847722  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:46.847735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.880771  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:46.880787  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:46.946188  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:46.946208  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:46.965130  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:46.965147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:47.035809  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:47.035820  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:47.035843  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.603876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:49.614271  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:49.614332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:49.640814  832221 cri.go:89] found id: ""
	I1208 00:40:49.640827  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.640834  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:49.640840  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:49.640898  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:49.670323  832221 cri.go:89] found id: ""
	I1208 00:40:49.670337  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.670345  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:49.670351  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:49.670409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:49.696270  832221 cri.go:89] found id: ""
	I1208 00:40:49.696284  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.696290  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:49.696295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:49.696353  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:49.725434  832221 cri.go:89] found id: ""
	I1208 00:40:49.725448  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.725454  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:49.725468  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:49.725525  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:49.760362  832221 cri.go:89] found id: ""
	I1208 00:40:49.760375  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.760382  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:49.760393  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:49.760450  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:49.789531  832221 cri.go:89] found id: ""
	I1208 00:40:49.789545  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.789552  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:49.789567  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:49.789637  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:49.818353  832221 cri.go:89] found id: ""
	I1208 00:40:49.818367  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.818374  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:49.818390  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:49.818401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.890934  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:49.890956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:49.919198  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:49.919214  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:49.988173  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:49.988194  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:50.007229  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:50.007249  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:50.081725  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
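Every describe-nodes attempt in this stretch fails the same way: "connection refused" on localhost:8441, meaning nothing is listening on the apiserver port this profile uses. A quick manual check, assuming shell access to the node (the port, binary path, and kubeconfig path are taken from the log above; the commands themselves are standard tools, not part of the test output), might be:

	# Check whether anything is bound to the apiserver port and whether it answers.
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
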
	I1208 00:40:52.581991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:52.592775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:52.592847  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:52.619761  832221 cri.go:89] found id: ""
	I1208 00:40:52.619775  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.619782  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:52.619788  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:52.619853  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:52.647647  832221 cri.go:89] found id: ""
	I1208 00:40:52.647662  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.647669  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:52.647674  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:52.647761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:52.673131  832221 cri.go:89] found id: ""
	I1208 00:40:52.673145  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.673152  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:52.673161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:52.673228  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:52.699525  832221 cri.go:89] found id: ""
	I1208 00:40:52.699540  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.699547  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:52.699553  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:52.699620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:52.725467  832221 cri.go:89] found id: ""
	I1208 00:40:52.725482  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.725489  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:52.725494  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:52.725556  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:52.756767  832221 cri.go:89] found id: ""
	I1208 00:40:52.756782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.756790  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:52.756796  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:52.756855  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:52.787768  832221 cri.go:89] found id: ""
	I1208 00:40:52.787782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.787790  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:52.787797  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:52.787808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:52.817811  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:52.817827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:52.889380  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:52.889401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:52.906939  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:52.906956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:52.971866  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.971876  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:52.971889  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.544702  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:55.554800  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:55.554875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:55.581294  832221 cri.go:89] found id: ""
	I1208 00:40:55.581309  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.581316  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:55.581321  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:55.581384  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:55.609189  832221 cri.go:89] found id: ""
	I1208 00:40:55.609210  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.609217  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:55.609222  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:55.609281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:55.636121  832221 cri.go:89] found id: ""
	I1208 00:40:55.636135  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.636142  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:55.636147  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:55.636212  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:55.661670  832221 cri.go:89] found id: ""
	I1208 00:40:55.661684  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.661691  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:55.661697  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:55.661756  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:55.687332  832221 cri.go:89] found id: ""
	I1208 00:40:55.687345  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.687352  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:55.687358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:55.687416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:55.713054  832221 cri.go:89] found id: ""
	I1208 00:40:55.713069  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.713076  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:55.713082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:55.713140  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:55.742979  832221 cri.go:89] found id: ""
	I1208 00:40:55.742993  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.743000  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:55.743008  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:55.743019  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:55.761280  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:55.761297  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:55.838925  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:55.838936  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:55.838949  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.910195  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:55.910218  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:55.940346  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:55.940364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
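Since no control-plane containers exist at all, the kubelet journal gathered above is the most informative artifact: the kubelet is responsible for starting the control-plane static pods. A hedged follow-up check, assuming the standard kubeadm static-pod manifest directory (that path is an assumption; it does not appear in this log), could be:

	# Hypothetical follow-up: confirm the kubelet is running and that static pod manifests exist.
	sudo systemctl status kubelet --no-pager | head -n 5
	sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 20
	sudo ls -l /etc/kubernetes/manifests/   # standard kubeadm path; assumed, not shown in the log
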
	I1208 00:40:58.509357  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:58.519836  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:58.519901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:58.545859  832221 cri.go:89] found id: ""
	I1208 00:40:58.545874  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.545881  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:58.545887  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:58.545948  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:58.575589  832221 cri.go:89] found id: ""
	I1208 00:40:58.575603  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.575609  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:58.575614  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:58.575672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:58.604890  832221 cri.go:89] found id: ""
	I1208 00:40:58.604905  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.604911  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:58.604917  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:58.604974  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:58.630992  832221 cri.go:89] found id: ""
	I1208 00:40:58.631006  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.631013  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:58.631018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:58.631075  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:58.656862  832221 cri.go:89] found id: ""
	I1208 00:40:58.656875  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.656882  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:58.656887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:58.656950  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:58.693729  832221 cri.go:89] found id: ""
	I1208 00:40:58.693744  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.693751  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:58.693756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:58.693815  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:58.719999  832221 cri.go:89] found id: ""
	I1208 00:40:58.720014  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.720021  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:58.720029  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:58.720040  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.787457  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:58.787475  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:58.809951  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:58.809970  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:58.877531  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:58.877584  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:58.877595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:58.944804  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:58.944823  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:01.474302  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:01.485101  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:01.485163  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:01.512067  832221 cri.go:89] found id: ""
	I1208 00:41:01.512081  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.512094  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:01.512100  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:01.512173  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:01.538625  832221 cri.go:89] found id: ""
	I1208 00:41:01.538639  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.538646  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:01.538651  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:01.538712  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:01.564246  832221 cri.go:89] found id: ""
	I1208 00:41:01.564260  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.564268  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:01.564273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:01.564341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:01.590766  832221 cri.go:89] found id: ""
	I1208 00:41:01.590780  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.590787  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:01.590793  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:01.590880  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:01.618080  832221 cri.go:89] found id: ""
	I1208 00:41:01.618095  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.618102  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:01.618107  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:01.618166  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:01.644849  832221 cri.go:89] found id: ""
	I1208 00:41:01.644864  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.644872  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:01.644878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:01.644943  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:01.670907  832221 cri.go:89] found id: ""
	I1208 00:41:01.670927  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.670945  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:01.670953  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:01.670972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:01.737140  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:01.737160  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:01.756176  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:01.756199  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:01.837855  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:01.837866  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:01.837880  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:01.907644  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:01.907665  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:04.439011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:04.449676  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:04.449738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:04.475094  832221 cri.go:89] found id: ""
	I1208 00:41:04.475107  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.475116  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:04.475122  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:04.475180  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:04.499488  832221 cri.go:89] found id: ""
	I1208 00:41:04.499502  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.499509  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:04.499514  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:04.499574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:04.524302  832221 cri.go:89] found id: ""
	I1208 00:41:04.524315  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.524322  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:04.524328  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:04.524399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:04.550178  832221 cri.go:89] found id: ""
	I1208 00:41:04.550192  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.550207  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:04.550214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:04.550290  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:04.579863  832221 cri.go:89] found id: ""
	I1208 00:41:04.579876  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.579883  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:04.579888  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:04.579947  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:04.612186  832221 cri.go:89] found id: ""
	I1208 00:41:04.612200  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.612207  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:04.612212  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:04.612268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:04.638270  832221 cri.go:89] found id: ""
	I1208 00:41:04.638291  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.638298  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:04.638305  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:04.638316  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:04.704479  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:04.704498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:04.721141  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:04.721158  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:04.791977  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:04.791987  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:04.792009  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:04.869143  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:04.869164  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:07.399175  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:07.409630  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:07.409692  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:07.436029  832221 cri.go:89] found id: ""
	I1208 00:41:07.436051  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.436059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:07.436065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:07.436133  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:07.462353  832221 cri.go:89] found id: ""
	I1208 00:41:07.462367  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.462374  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:07.462379  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:07.462438  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:07.488128  832221 cri.go:89] found id: ""
	I1208 00:41:07.488142  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.488149  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:07.488154  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:07.488217  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:07.516680  832221 cri.go:89] found id: ""
	I1208 00:41:07.516694  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.516700  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:07.516705  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:07.516761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:07.541724  832221 cri.go:89] found id: ""
	I1208 00:41:07.541738  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.541747  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:07.541752  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:07.541809  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:07.566019  832221 cri.go:89] found id: ""
	I1208 00:41:07.566033  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.566049  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:07.566055  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:07.566120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:07.590763  832221 cri.go:89] found id: ""
	I1208 00:41:07.590786  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.590793  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:07.590800  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:07.590811  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:07.655603  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:07.655627  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:07.672718  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:07.672735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:07.739768  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:07.739777  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:07.739788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:07.818332  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:07.818351  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:10.352542  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:10.362750  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:10.362807  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:10.387611  832221 cri.go:89] found id: ""
	I1208 00:41:10.387625  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.387631  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:10.387637  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:10.387702  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:10.416324  832221 cri.go:89] found id: ""
	I1208 00:41:10.416338  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.416344  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:10.416349  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:10.416407  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:10.441107  832221 cri.go:89] found id: ""
	I1208 00:41:10.441121  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.441128  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:10.441133  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:10.441199  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:10.469633  832221 cri.go:89] found id: ""
	I1208 00:41:10.469646  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.469659  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:10.469664  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:10.469723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:10.494876  832221 cri.go:89] found id: ""
	I1208 00:41:10.494890  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.494896  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:10.494902  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:10.494960  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:10.531392  832221 cri.go:89] found id: ""
	I1208 00:41:10.531407  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.531414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:10.531419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:10.531488  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:10.564042  832221 cri.go:89] found id: ""
	I1208 00:41:10.564056  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.564063  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:10.564072  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:10.564082  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:10.630069  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:10.630089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:10.647244  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:10.647260  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:10.722704  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:10.722715  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:10.722727  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:10.795845  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:10.795865  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.326398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:13.336729  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:13.336789  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:13.362204  832221 cri.go:89] found id: ""
	I1208 00:41:13.362218  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.362225  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:13.362231  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:13.362288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:13.387741  832221 cri.go:89] found id: ""
	I1208 00:41:13.387755  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.387762  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:13.387767  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:13.387825  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:13.416495  832221 cri.go:89] found id: ""
	I1208 00:41:13.416508  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.416515  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:13.416520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:13.416580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:13.442986  832221 cri.go:89] found id: ""
	I1208 00:41:13.443000  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.443008  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:13.443015  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:13.443074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:13.468540  832221 cri.go:89] found id: ""
	I1208 00:41:13.468555  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.468562  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:13.468568  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:13.468626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:13.494472  832221 cri.go:89] found id: ""
	I1208 00:41:13.494487  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.494494  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:13.494500  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:13.494561  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:13.521305  832221 cri.go:89] found id: ""
	I1208 00:41:13.521318  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.521325  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:13.521333  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:13.521347  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.553343  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:13.553359  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:13.621324  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:13.621342  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:13.638433  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:13.638450  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:13.707199  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:13.707209  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:13.707232  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.276942  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:16.286989  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:16.287051  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:16.312004  832221 cri.go:89] found id: ""
	I1208 00:41:16.312018  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.312025  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:16.312031  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:16.312090  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:16.336677  832221 cri.go:89] found id: ""
	I1208 00:41:16.336691  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.336698  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:16.336703  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:16.336763  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:16.361556  832221 cri.go:89] found id: ""
	I1208 00:41:16.361579  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.361587  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:16.361592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:16.361661  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:16.386950  832221 cri.go:89] found id: ""
	I1208 00:41:16.386964  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.386971  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:16.386977  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:16.387045  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:16.413845  832221 cri.go:89] found id: ""
	I1208 00:41:16.413867  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.413877  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:16.413883  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:16.413949  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:16.439928  832221 cri.go:89] found id: ""
	I1208 00:41:16.439942  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.439959  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:16.439965  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:16.440030  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:16.466154  832221 cri.go:89] found id: ""
	I1208 00:41:16.466176  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.466183  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:16.466191  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:16.466201  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.533106  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:16.533124  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:16.563727  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:16.563742  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:16.633732  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:16.633751  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:16.650899  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:16.650917  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:16.719345  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.221010  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:19.231342  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:19.231406  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:19.257316  832221 cri.go:89] found id: ""
	I1208 00:41:19.257330  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.257337  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:19.257343  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:19.257401  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:19.283560  832221 cri.go:89] found id: ""
	I1208 00:41:19.283574  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.283581  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:19.283586  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:19.283645  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:19.309316  832221 cri.go:89] found id: ""
	I1208 00:41:19.309332  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.309339  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:19.309344  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:19.309404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:19.336530  832221 cri.go:89] found id: ""
	I1208 00:41:19.336544  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.336551  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:19.336558  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:19.336617  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:19.362493  832221 cri.go:89] found id: ""
	I1208 00:41:19.362507  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.362515  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:19.362520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:19.362580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:19.388582  832221 cri.go:89] found id: ""
	I1208 00:41:19.388602  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.388609  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:19.388614  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:19.388671  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:19.414534  832221 cri.go:89] found id: ""
	I1208 00:41:19.414547  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.414554  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:19.414562  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:19.414573  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:19.478886  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.478896  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:19.478908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:19.547311  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:19.547330  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:19.577785  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:19.577801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:19.643881  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:19.643902  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.161081  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:22.171521  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:22.171585  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:22.198382  832221 cri.go:89] found id: ""
	I1208 00:41:22.198396  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.198413  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:22.198418  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:22.198474  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:22.224532  832221 cri.go:89] found id: ""
	I1208 00:41:22.224547  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.224554  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:22.224560  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:22.224618  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:22.250646  832221 cri.go:89] found id: ""
	I1208 00:41:22.250660  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.250667  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:22.250672  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:22.250738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:22.276120  832221 cri.go:89] found id: ""
	I1208 00:41:22.276134  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.276141  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:22.276146  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:22.276204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:22.307378  832221 cri.go:89] found id: ""
	I1208 00:41:22.307392  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.307399  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:22.307405  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:22.307481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:22.332887  832221 cri.go:89] found id: ""
	I1208 00:41:22.332902  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.332909  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:22.332915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:22.332973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:22.359765  832221 cri.go:89] found id: ""
	I1208 00:41:22.359790  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.359799  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:22.359806  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:22.359817  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:22.429639  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:22.429667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.446411  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:22.446429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:22.514425  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:22.514437  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:22.514449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:22.582646  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:22.582668  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.113244  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:25.123522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:25.123581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:25.149789  832221 cri.go:89] found id: ""
	I1208 00:41:25.149803  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.149811  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:25.149816  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:25.149877  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:25.175748  832221 cri.go:89] found id: ""
	I1208 00:41:25.175780  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.175787  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:25.175793  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:25.175860  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:25.201633  832221 cri.go:89] found id: ""
	I1208 00:41:25.201647  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.201654  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:25.201660  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:25.201718  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:25.226256  832221 cri.go:89] found id: ""
	I1208 00:41:25.226270  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.226276  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:25.226282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:25.226340  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:25.251247  832221 cri.go:89] found id: ""
	I1208 00:41:25.251260  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.251267  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:25.251272  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:25.251332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:25.276489  832221 cri.go:89] found id: ""
	I1208 00:41:25.276502  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.276509  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:25.276514  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:25.276571  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:25.304102  832221 cri.go:89] found id: ""
	I1208 00:41:25.304116  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.304123  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:25.304131  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:25.304141  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.334560  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:25.334578  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:25.403772  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:25.403794  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:25.420560  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:25.420577  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:25.482668  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:25.482678  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:25.482689  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.050629  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:28.061960  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:28.062020  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:28.089309  832221 cri.go:89] found id: ""
	I1208 00:41:28.089322  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.089330  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:28.089335  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:28.089394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:28.114535  832221 cri.go:89] found id: ""
	I1208 00:41:28.114549  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.114556  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:28.114561  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:28.114620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:28.139191  832221 cri.go:89] found id: ""
	I1208 00:41:28.139205  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.139212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:28.139218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:28.139281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:28.169942  832221 cri.go:89] found id: ""
	I1208 00:41:28.169956  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.169963  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:28.169968  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:28.170026  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:28.194906  832221 cri.go:89] found id: ""
	I1208 00:41:28.194920  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.194927  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:28.194932  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:28.194991  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:28.220745  832221 cri.go:89] found id: ""
	I1208 00:41:28.220759  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.220766  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:28.220772  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:28.220831  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:28.246098  832221 cri.go:89] found id: ""
	I1208 00:41:28.246113  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.246128  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:28.246137  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:28.246147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:28.311151  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:28.311171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:28.328051  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:28.328067  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:28.392162  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:28.392172  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:28.392183  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.461355  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:28.461376  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:30.991861  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:31.002524  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:31.002603  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:31.053691  832221 cri.go:89] found id: ""
	I1208 00:41:31.053708  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.053715  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:31.053725  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:31.053785  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:31.089132  832221 cri.go:89] found id: ""
	I1208 00:41:31.089146  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.089163  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:31.089169  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:31.089252  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:31.121093  832221 cri.go:89] found id: ""
	I1208 00:41:31.121107  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.121114  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:31.121120  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:31.121193  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:31.148473  832221 cri.go:89] found id: ""
	I1208 00:41:31.148502  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.148510  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:31.148517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:31.148576  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:31.174204  832221 cri.go:89] found id: ""
	I1208 00:41:31.174218  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.174225  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:31.174231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:31.174291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:31.199996  832221 cri.go:89] found id: ""
	I1208 00:41:31.200009  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.200016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:31.200021  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:31.200079  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:31.224662  832221 cri.go:89] found id: ""
	I1208 00:41:31.224674  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.224681  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:31.224689  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:31.224699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:31.291397  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:31.291417  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:31.308061  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:31.308078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:31.372069  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:31.372079  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:31.372089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:31.443951  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:31.443972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:33.976603  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:33.987054  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:33.987113  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:34.031182  832221 cri.go:89] found id: ""
	I1208 00:41:34.031197  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.031205  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:34.031211  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:34.031285  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:34.060124  832221 cri.go:89] found id: ""
	I1208 00:41:34.060137  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.060145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:34.060150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:34.060207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:34.092539  832221 cri.go:89] found id: ""
	I1208 00:41:34.092553  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.092560  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:34.092565  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:34.092627  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:34.121995  832221 cri.go:89] found id: ""
	I1208 00:41:34.122009  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.122016  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:34.122022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:34.122077  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:34.150463  832221 cri.go:89] found id: ""
	I1208 00:41:34.150476  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.150483  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:34.150488  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:34.150549  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:34.177998  832221 cri.go:89] found id: ""
	I1208 00:41:34.178021  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.178029  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:34.178034  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:34.178102  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:34.202722  832221 cri.go:89] found id: ""
	I1208 00:41:34.202737  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.202744  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:34.202751  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:34.202761  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:34.267650  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:34.267670  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:34.284346  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:34.284364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:34.348837  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:34.348848  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:34.348858  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:34.417091  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:34.417112  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:36.948347  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:36.958825  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:36.958908  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:36.984186  832221 cri.go:89] found id: ""
	I1208 00:41:36.984200  832221 logs.go:282] 0 containers: []
	W1208 00:41:36.984207  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:36.984212  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:36.984269  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:37.020431  832221 cri.go:89] found id: ""
	I1208 00:41:37.020446  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.020454  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:37.020460  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:37.020530  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:37.067191  832221 cri.go:89] found id: ""
	I1208 00:41:37.067205  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.067212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:37.067218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:37.067294  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:37.094272  832221 cri.go:89] found id: ""
	I1208 00:41:37.094286  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.094293  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:37.094298  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:37.094355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:37.119686  832221 cri.go:89] found id: ""
	I1208 00:41:37.119709  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.119716  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:37.119722  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:37.119787  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:37.145200  832221 cri.go:89] found id: ""
	I1208 00:41:37.145214  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.145221  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:37.145227  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:37.145288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:37.171336  832221 cri.go:89] found id: ""
	I1208 00:41:37.171350  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.171357  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:37.171364  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:37.171375  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:37.237645  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:37.237664  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:37.254543  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:37.254560  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:37.322370  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:37.322380  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:37.322392  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:37.391923  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:37.391943  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:39.926099  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:39.936345  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:39.936412  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:39.962579  832221 cri.go:89] found id: ""
	I1208 00:41:39.962593  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.962600  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:39.962605  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:39.962669  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:39.989842  832221 cri.go:89] found id: ""
	I1208 00:41:39.989856  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.989863  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:39.989868  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:39.989926  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:40.044295  832221 cri.go:89] found id: ""
	I1208 00:41:40.044310  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.044325  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:40.044339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:40.044416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:40.079243  832221 cri.go:89] found id: ""
	I1208 00:41:40.079258  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.079266  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:40.079273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:40.079349  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:40.112934  832221 cri.go:89] found id: ""
	I1208 00:41:40.112948  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.112956  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:40.112961  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:40.113039  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:40.143499  832221 cri.go:89] found id: ""
	I1208 00:41:40.143513  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.143521  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:40.143526  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:40.143587  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:40.169504  832221 cri.go:89] found id: ""
	I1208 00:41:40.169519  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.169526  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:40.169533  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:40.169544  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:40.235615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:40.235638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:40.252840  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:40.252857  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:40.321804  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:40.321814  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:40.321827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:40.390368  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:40.390389  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:42.923500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:42.933619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:42.933678  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:42.959506  832221 cri.go:89] found id: ""
	I1208 00:41:42.959520  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.959527  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:42.959533  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:42.959596  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:42.984924  832221 cri.go:89] found id: ""
	I1208 00:41:42.984937  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.984946  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:42.984951  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:42.985013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:43.023875  832221 cri.go:89] found id: ""
	I1208 00:41:43.023889  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.023896  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:43.023903  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:43.023962  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:43.053076  832221 cri.go:89] found id: ""
	I1208 00:41:43.053090  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.053097  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:43.053102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:43.053185  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:43.084087  832221 cri.go:89] found id: ""
	I1208 00:41:43.084101  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.084108  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:43.084113  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:43.084174  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:43.109712  832221 cri.go:89] found id: ""
	I1208 00:41:43.109737  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.109746  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:43.109751  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:43.109817  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:43.134863  832221 cri.go:89] found id: ""
	I1208 00:41:43.134877  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.134886  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:43.134894  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:43.134908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:43.201957  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:43.201967  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:43.201982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:43.273086  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:43.273107  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:43.305154  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:43.305177  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:43.373686  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:43.373708  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:45.892403  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:45.902913  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:45.902990  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:45.927841  832221 cri.go:89] found id: ""
	I1208 00:41:45.927855  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.927862  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:45.927868  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:45.927927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:45.952154  832221 cri.go:89] found id: ""
	I1208 00:41:45.952167  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.952174  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:45.952179  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:45.952236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:45.979675  832221 cri.go:89] found id: ""
	I1208 00:41:45.979688  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.979696  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:45.979700  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:45.979755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:46.013259  832221 cri.go:89] found id: ""
	I1208 00:41:46.013273  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.013280  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:46.013285  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:46.013351  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:46.042352  832221 cri.go:89] found id: ""
	I1208 00:41:46.042366  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.042372  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:46.042377  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:46.042440  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:46.070733  832221 cri.go:89] found id: ""
	I1208 00:41:46.070746  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.070753  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:46.070763  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:46.070823  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:46.098473  832221 cri.go:89] found id: ""
	I1208 00:41:46.098487  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.098494  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:46.098502  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:46.098512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:46.125193  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:46.125209  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:46.193253  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:46.193274  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:46.210082  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:46.210099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:46.276709  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:46.276719  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:46.276730  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:48.845307  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:48.856005  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:48.856069  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:48.880627  832221 cri.go:89] found id: ""
	I1208 00:41:48.880643  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.880650  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:48.880655  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:48.880723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:48.910676  832221 cri.go:89] found id: ""
	I1208 00:41:48.910691  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.910699  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:48.910704  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:48.910765  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:48.937001  832221 cri.go:89] found id: ""
	I1208 00:41:48.937015  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.937022  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:48.937027  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:48.937087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:48.961464  832221 cri.go:89] found id: ""
	I1208 00:41:48.961478  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.961484  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:48.961489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:48.961546  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:48.985593  832221 cri.go:89] found id: ""
	I1208 00:41:48.985607  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.985614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:48.985618  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:48.985673  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:49.021903  832221 cri.go:89] found id: ""
	I1208 00:41:49.021917  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.021924  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:49.021929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:49.021987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:49.051822  832221 cri.go:89] found id: ""
	I1208 00:41:49.051835  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.051842  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:49.051850  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:49.051860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:49.119331  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:49.119350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:49.136412  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:49.136429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:49.209120  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:49.209130  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:49.209142  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:49.281668  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:49.281696  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:51.816189  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:51.826432  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:51.826508  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:51.852549  832221 cri.go:89] found id: ""
	I1208 00:41:51.852563  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.852570  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:51.852575  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:51.852639  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:51.882102  832221 cri.go:89] found id: ""
	I1208 00:41:51.882115  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.882123  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:51.882128  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:51.882183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:51.908918  832221 cri.go:89] found id: ""
	I1208 00:41:51.908931  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.908938  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:51.908943  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:51.908999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:51.933704  832221 cri.go:89] found id: ""
	I1208 00:41:51.933718  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.933725  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:51.933731  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:51.933786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:51.959460  832221 cri.go:89] found id: ""
	I1208 00:41:51.959474  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.959480  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:51.959485  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:51.959543  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:51.985138  832221 cri.go:89] found id: ""
	I1208 00:41:51.985151  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.985158  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:51.985170  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:51.985229  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:52.017078  832221 cri.go:89] found id: ""
	I1208 00:41:52.017092  832221 logs.go:282] 0 containers: []
	W1208 00:41:52.017100  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:52.017108  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:52.017118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:52.061579  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:52.061595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:52.130427  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:52.130446  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:52.146893  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:52.146909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:52.216088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:52.216098  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:52.216109  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:54.782500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:54.793061  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:54.793123  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:54.818661  832221 cri.go:89] found id: ""
	I1208 00:41:54.818675  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.818682  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:54.818688  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:54.818747  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:54.843336  832221 cri.go:89] found id: ""
	I1208 00:41:54.843351  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.843358  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:54.843363  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:54.843423  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:54.873031  832221 cri.go:89] found id: ""
	I1208 00:41:54.873045  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.873052  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:54.873057  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:54.873114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:54.904194  832221 cri.go:89] found id: ""
	I1208 00:41:54.904208  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.904215  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:54.904221  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:54.904281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:54.928355  832221 cri.go:89] found id: ""
	I1208 00:41:54.928370  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.928377  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:54.928382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:54.928441  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:54.954187  832221 cri.go:89] found id: ""
	I1208 00:41:54.954201  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.954208  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:54.954214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:54.954277  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:54.979288  832221 cri.go:89] found id: ""
	I1208 00:41:54.979301  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.979308  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:54.979316  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:54.979329  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:55.047402  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:55.047422  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:55.065193  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:55.065210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:55.134035  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:55.134045  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:55.134056  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:55.202635  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:55.202656  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:57.732860  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:57.743009  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:57.743070  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:57.769255  832221 cri.go:89] found id: ""
	I1208 00:41:57.769270  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.769277  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:57.769282  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:57.769341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:57.796071  832221 cri.go:89] found id: ""
	I1208 00:41:57.796084  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.796092  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:57.796097  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:57.796152  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:57.821305  832221 cri.go:89] found id: ""
	I1208 00:41:57.821319  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.821326  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:57.821331  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:57.821389  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:57.850632  832221 cri.go:89] found id: ""
	I1208 00:41:57.850646  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.850653  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:57.850658  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:57.850715  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:57.874739  832221 cri.go:89] found id: ""
	I1208 00:41:57.874753  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.874760  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:57.874766  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:57.874829  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:57.898660  832221 cri.go:89] found id: ""
	I1208 00:41:57.898674  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.898681  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:57.898687  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:57.898744  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:57.924451  832221 cri.go:89] found id: ""
	I1208 00:41:57.924465  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.924472  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:57.924480  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:57.924490  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:57.990717  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:57.990739  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:58.009617  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:58.009637  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:58.089328  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:58.089339  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:58.089350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:58.158129  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:58.158149  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:00.692822  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:00.703351  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:00.703413  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:00.730817  832221 cri.go:89] found id: ""
	I1208 00:42:00.730831  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.730838  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:00.730864  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:00.730925  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:00.757577  832221 cri.go:89] found id: ""
	I1208 00:42:00.757591  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.757599  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:00.757604  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:00.757668  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:00.784124  832221 cri.go:89] found id: ""
	I1208 00:42:00.784140  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.784147  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:00.784153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:00.784213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:00.811121  832221 cri.go:89] found id: ""
	I1208 00:42:00.811136  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.811143  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:00.811149  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:00.811207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:00.838124  832221 cri.go:89] found id: ""
	I1208 00:42:00.838139  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.838147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:00.838153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:00.838216  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:00.864699  832221 cri.go:89] found id: ""
	I1208 00:42:00.864713  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.864720  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:00.864726  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:00.864786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:00.890750  832221 cri.go:89] found id: ""
	I1208 00:42:00.890772  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.890780  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:00.890788  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:00.890799  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:00.956810  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:00.956830  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:00.973943  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:00.973959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:01.050555  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:01.050566  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:01.050579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:01.129234  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:01.129257  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:03.659413  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:03.669877  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:03.669937  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:03.696297  832221 cri.go:89] found id: ""
	I1208 00:42:03.696316  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.696324  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:03.696329  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:03.696388  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:03.722691  832221 cri.go:89] found id: ""
	I1208 00:42:03.722706  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.722713  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:03.722718  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:03.722777  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:03.749319  832221 cri.go:89] found id: ""
	I1208 00:42:03.749336  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.749343  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:03.749348  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:03.749409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:03.778235  832221 cri.go:89] found id: ""
	I1208 00:42:03.778250  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.778257  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:03.778262  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:03.778323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:03.805566  832221 cri.go:89] found id: ""
	I1208 00:42:03.805579  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.805586  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:03.805592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:03.805656  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:03.835418  832221 cri.go:89] found id: ""
	I1208 00:42:03.835434  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.835441  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:03.835447  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:03.835507  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:03.862034  832221 cri.go:89] found id: ""
	I1208 00:42:03.862048  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.862056  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:03.862063  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:03.862074  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:03.926004  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:03.926014  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:03.926025  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:03.994473  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:03.994491  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:04.028498  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:04.028530  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:04.103887  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:04.103913  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:06.621744  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:06.631952  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:06.632014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:06.656834  832221 cri.go:89] found id: ""
	I1208 00:42:06.656847  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.656855  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:06.656859  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:06.656915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:06.681945  832221 cri.go:89] found id: ""
	I1208 00:42:06.681960  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.681967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:06.681972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:06.682029  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:06.710714  832221 cri.go:89] found id: ""
	I1208 00:42:06.710728  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.710735  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:06.710741  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:06.710798  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:06.737689  832221 cri.go:89] found id: ""
	I1208 00:42:06.737703  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.737710  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:06.737716  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:06.737773  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:06.763380  832221 cri.go:89] found id: ""
	I1208 00:42:06.763394  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.763401  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:06.763406  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:06.763468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:06.788657  832221 cri.go:89] found id: ""
	I1208 00:42:06.788672  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.788679  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:06.788684  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:06.788743  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:06.814619  832221 cri.go:89] found id: ""
	I1208 00:42:06.814633  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.814641  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:06.814648  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:06.814659  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:06.876947  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:06.876957  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:06.876967  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:06.945083  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:06.945103  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:06.975476  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:06.975492  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:07.049079  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:07.049111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.568507  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:09.578816  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:09.578896  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:09.604243  832221 cri.go:89] found id: ""
	I1208 00:42:09.604264  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.604271  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:09.604276  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:09.604335  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:09.629065  832221 cri.go:89] found id: ""
	I1208 00:42:09.629079  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.629086  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:09.629091  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:09.629187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:09.657275  832221 cri.go:89] found id: ""
	I1208 00:42:09.657288  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.657295  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:09.657300  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:09.657356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:09.683416  832221 cri.go:89] found id: ""
	I1208 00:42:09.683431  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.683438  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:09.683443  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:09.683500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:09.709238  832221 cri.go:89] found id: ""
	I1208 00:42:09.709261  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.709269  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:09.709274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:09.709339  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:09.734114  832221 cri.go:89] found id: ""
	I1208 00:42:09.734128  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.734134  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:09.734152  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:09.734209  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:09.759311  832221 cri.go:89] found id: ""
	I1208 00:42:09.759325  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.759331  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:09.759339  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:09.759349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:09.824496  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:09.824516  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.841803  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:09.841820  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:09.904180  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:09.904190  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:09.904207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:09.971074  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:09.971095  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:12.508051  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:12.518216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:12.518274  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:12.544077  832221 cri.go:89] found id: ""
	I1208 00:42:12.544098  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.544105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:12.544121  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:12.544183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:12.573722  832221 cri.go:89] found id: ""
	I1208 00:42:12.573737  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.573744  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:12.573749  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:12.573814  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:12.605486  832221 cri.go:89] found id: ""
	I1208 00:42:12.605500  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.605508  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:12.605513  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:12.605573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:12.630248  832221 cri.go:89] found id: ""
	I1208 00:42:12.630262  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.630269  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:12.630274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:12.630334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:12.657639  832221 cri.go:89] found id: ""
	I1208 00:42:12.657653  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.657660  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:12.657665  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:12.657729  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:12.687466  832221 cri.go:89] found id: ""
	I1208 00:42:12.687488  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.687495  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:12.687501  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:12.687560  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:12.712697  832221 cri.go:89] found id: ""
	I1208 00:42:12.712713  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.712720  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:12.712729  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:12.712740  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:12.782236  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:12.782256  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:12.798869  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:12.798890  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:12.869748  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:12.869759  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:12.869772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:12.940819  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:12.940839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:15.471472  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:15.481993  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:15.482061  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:15.508029  832221 cri.go:89] found id: ""
	I1208 00:42:15.508043  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.508050  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:15.508055  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:15.508114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:15.533198  832221 cri.go:89] found id: ""
	I1208 00:42:15.533212  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.533219  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:15.533224  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:15.533293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:15.559200  832221 cri.go:89] found id: ""
	I1208 00:42:15.559215  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.559222  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:15.559230  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:15.559292  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:15.586368  832221 cri.go:89] found id: ""
	I1208 00:42:15.586382  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.586389  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:15.586394  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:15.586463  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:15.613829  832221 cri.go:89] found id: ""
	I1208 00:42:15.613862  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.613870  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:15.613875  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:15.613939  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:15.638601  832221 cri.go:89] found id: ""
	I1208 00:42:15.638616  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.638623  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:15.638629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:15.638687  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:15.663577  832221 cri.go:89] found id: ""
	I1208 00:42:15.663592  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.663599  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:15.663606  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:15.663617  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:15.729315  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:15.729346  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:15.746062  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:15.746081  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:15.817222  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:15.817234  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:15.817246  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:15.884896  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:15.884916  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
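The cycle above, and the near-identical cycles that follow, are minikube polling the node while it waits for the control plane to come back: it looks for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and then re-gathers the kubelet, dmesg, describe-nodes, CRI-O and container-status logs. The describe-nodes step fails every time because nothing is listening on the apiserver port. A minimal way to reproduce the same probe by hand (a sketch, run inside the minikube node; the process pattern, container name and port 8441 are taken from the log above):

    sudo pgrep -xnf kube-apiserver.*minikube.*         # no output: the apiserver process is not running
    sudo crictl ps -a --quiet --name=kube-apiserver    # empty: no apiserver container exists either
    curl -k https://localhost:8441/livez               # connection refused while the apiserver is down (hypothetical extra check)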
	I1208 00:42:18.414159  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:18.424398  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:18.424464  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:18.454155  832221 cri.go:89] found id: ""
	I1208 00:42:18.454169  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.454177  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:18.454183  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:18.454245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:18.479882  832221 cri.go:89] found id: ""
	I1208 00:42:18.479896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.479904  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:18.479909  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:18.479969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:18.505299  832221 cri.go:89] found id: ""
	I1208 00:42:18.505313  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.505320  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:18.505325  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:18.505383  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:18.532868  832221 cri.go:89] found id: ""
	I1208 00:42:18.532881  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.532889  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:18.532894  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:18.532954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:18.561651  832221 cri.go:89] found id: ""
	I1208 00:42:18.561664  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.561671  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:18.561677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:18.561735  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:18.589482  832221 cri.go:89] found id: ""
	I1208 00:42:18.589496  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.589503  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:18.589509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:18.589566  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:18.613882  832221 cri.go:89] found id: ""
	I1208 00:42:18.613896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.613904  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:18.613911  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:18.613922  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.641758  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:18.641774  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:18.717185  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:18.717210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:18.734137  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:18.734155  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:18.802653  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:18.802664  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:18.802676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.371665  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:21.383636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:21.383698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:21.408072  832221 cri.go:89] found id: ""
	I1208 00:42:21.408086  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.408093  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:21.408098  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:21.408155  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:21.432924  832221 cri.go:89] found id: ""
	I1208 00:42:21.432948  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.432955  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:21.432961  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:21.433025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:21.457883  832221 cri.go:89] found id: ""
	I1208 00:42:21.457897  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.457904  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:21.457909  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:21.457967  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:21.483388  832221 cri.go:89] found id: ""
	I1208 00:42:21.483402  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.483410  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:21.483415  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:21.483475  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:21.509434  832221 cri.go:89] found id: ""
	I1208 00:42:21.509448  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.509456  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:21.509461  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:21.509519  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:21.534437  832221 cri.go:89] found id: ""
	I1208 00:42:21.534451  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.534458  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:21.534464  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:21.534521  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:21.559919  832221 cri.go:89] found id: ""
	I1208 00:42:21.559932  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.559939  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:21.559949  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:21.559959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:21.625640  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:21.625661  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:21.645629  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:21.645648  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:21.714153  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:21.714163  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:21.714173  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.781175  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:21.781196  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:24.310973  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:24.321986  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:24.322048  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:24.348885  832221 cri.go:89] found id: ""
	I1208 00:42:24.348899  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.348906  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:24.348912  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:24.348972  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:24.378380  832221 cri.go:89] found id: ""
	I1208 00:42:24.378394  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.378401  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:24.378407  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:24.378468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:24.403905  832221 cri.go:89] found id: ""
	I1208 00:42:24.403922  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.403933  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:24.403938  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:24.404014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:24.433947  832221 cri.go:89] found id: ""
	I1208 00:42:24.433961  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.433969  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:24.433975  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:24.434037  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:24.459342  832221 cri.go:89] found id: ""
	I1208 00:42:24.459356  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.459363  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:24.459368  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:24.459429  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:24.484750  832221 cri.go:89] found id: ""
	I1208 00:42:24.484764  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.484771  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:24.484777  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:24.484832  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:24.514464  832221 cri.go:89] found id: ""
	I1208 00:42:24.514478  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.514493  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:24.514501  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:24.514512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:24.580016  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:24.580037  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:24.598055  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:24.598071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:24.664079  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:24.664089  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:24.664099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:24.733616  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:24.733639  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:27.263764  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:27.274828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:27.274913  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:27.305226  832221 cri.go:89] found id: ""
	I1208 00:42:27.305241  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.305248  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:27.305253  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:27.305312  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:27.330800  832221 cri.go:89] found id: ""
	I1208 00:42:27.330815  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.330822  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:27.330827  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:27.330914  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:27.357232  832221 cri.go:89] found id: ""
	I1208 00:42:27.357246  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.357253  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:27.357258  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:27.357314  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:27.385173  832221 cri.go:89] found id: ""
	I1208 00:42:27.385186  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.385193  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:27.385199  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:27.385264  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:27.415410  832221 cri.go:89] found id: ""
	I1208 00:42:27.415423  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.415430  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:27.415435  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:27.415491  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:27.441114  832221 cri.go:89] found id: ""
	I1208 00:42:27.441128  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.441135  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:27.441140  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:27.441204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:27.468819  832221 cri.go:89] found id: ""
	I1208 00:42:27.468833  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.468841  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:27.468849  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:27.468859  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:27.534615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:27.534638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:27.552028  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:27.552044  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:27.617298  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:27.617308  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:27.617318  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:27.685006  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:27.685026  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.213024  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:30.223536  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:30.223597  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:30.252285  832221 cri.go:89] found id: ""
	I1208 00:42:30.252299  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.252306  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:30.252311  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:30.252378  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:30.283908  832221 cri.go:89] found id: ""
	I1208 00:42:30.283922  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.283931  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:30.283936  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:30.283994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:30.318884  832221 cri.go:89] found id: ""
	I1208 00:42:30.318899  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.318906  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:30.318912  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:30.318968  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:30.349060  832221 cri.go:89] found id: ""
	I1208 00:42:30.349075  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.349082  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:30.349088  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:30.349164  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:30.376813  832221 cri.go:89] found id: ""
	I1208 00:42:30.376829  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.376837  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:30.376842  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:30.376901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:30.404729  832221 cri.go:89] found id: ""
	I1208 00:42:30.404744  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.404750  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:30.404756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:30.404819  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:30.431212  832221 cri.go:89] found id: ""
	I1208 00:42:30.431226  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.431233  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:30.431241  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:30.431251  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:30.498900  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:30.498911  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:30.498921  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:30.567676  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:30.567699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.596733  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:30.596749  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:30.662190  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:30.662211  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:33.179806  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:33.190715  832221 kubeadm.go:602] duration metric: took 4m2.701897978s to restartPrimaryControlPlane
	W1208 00:42:33.190784  832221 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:42:33.190886  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
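With the restart attempt abandoned after roughly four minutes, minikube falls back to wiping the kubeadm state and re-initializing. The reset above removes the kubeconfig files and static pod manifests under /etc/kubernetes, while minikube's own certificates live under /var/lib/minikube/certs and appear to survive, which would explain the later "Using existing ... certificate and key on disk" lines. A quick way to confirm the post-reset state (a sketch; the paths come from the log, and the expectation about what survives is an assumption based on those later lines):

    sudo ls -la /etc/kubernetes/ /etc/kubernetes/manifests/ 2>/dev/null   # expected to be essentially empty after the reset
    sudo ls /var/lib/minikube/certs | head                                # CA and apiserver certs expected to still be present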
	I1208 00:42:33.600155  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:42:33.612954  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:42:33.620726  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:42:33.620779  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:42:33.628462  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:42:33.628471  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:42:33.628522  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:42:33.636365  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:42:33.636420  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:42:33.643722  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:42:33.651305  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:42:33.651360  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:42:33.658707  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.666176  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:42:33.666232  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.673523  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:42:33.681031  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:42:33.681086  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
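The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. Because the reset already deleted the files, every grep exits with status 2 and the rm calls are no-ops. The same check written as one loop (a sketch; the endpoint and file names are copied from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done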
	I1208 00:42:33.688609  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:42:33.724887  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:42:33.724941  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:42:33.797997  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:42:33.798062  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:42:33.798096  832221 kubeadm.go:319] OS: Linux
	I1208 00:42:33.798139  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:42:33.798186  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:42:33.798232  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:42:33.798279  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:42:33.798325  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:42:33.798372  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:42:33.798416  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:42:33.798462  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:42:33.798507  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:42:33.859952  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:42:33.860071  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:42:33.860170  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:42:33.868067  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:42:33.869917  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:42:33.869999  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:42:33.870063  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:42:33.870137  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:42:33.870197  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:42:33.870265  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:42:33.870368  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:42:33.870448  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:42:33.870928  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:42:33.871217  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:42:33.871538  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:42:33.871740  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:42:33.871797  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:42:34.028121  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:42:34.367427  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:42:34.702083  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:42:35.025762  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:42:35.511131  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:42:35.511826  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:42:35.514836  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:42:35.516409  832221 out.go:252]   - Booting up control plane ...
	I1208 00:42:35.516507  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:42:35.516848  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:42:35.519384  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:42:35.533955  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:42:35.534084  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:42:35.541753  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:42:35.542016  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:42:35.542213  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:42:35.674531  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:42:35.674638  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:46:35.675373  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115059s
	I1208 00:46:35.675397  832221 kubeadm.go:319] 
	I1208 00:46:35.675450  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:46:35.675480  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:46:35.675578  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:46:35.675582  832221 kubeadm.go:319] 
	I1208 00:46:35.675680  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:46:35.675709  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:46:35.675738  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:46:35.675741  832221 kubeadm.go:319] 
	I1208 00:46:35.680376  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:46:35.680807  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:46:35.680915  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:46:35.681162  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:46:35.681167  832221 kubeadm.go:319] 
	I1208 00:46:35.681238  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
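Of the three preflight warnings, the cgroups one is the most relevant to this failure mode: the warning fires because the node exposes the legacy cgroup v1 hierarchy, and per its own text kubelet v1.35 or newer only accepts cgroup v1 when the kubelet configuration option 'FailCgroupV1' is set to 'false', which is a plausible reason the kubelet never answered its health check above. A quick check of which hierarchy a node actually uses (a sketch; the stat call is standard util-linux usage, and the interpretation of its output is the usual cgroup convention):

    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy
    # If this prints "tmpfs", the [WARNING SystemVerification] above applies and the kubelet
    # is expected to refuse cgroup v1 unless 'FailCgroupV1' is set to 'false' in its configuration.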
	W1208 00:46:35.681347  832221 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115059s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
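When kubeadm gives up on the kubelet health check like this, the place to look is the kubelet itself rather than kubeadm: the two commands it suggests, plus the health endpoint it was polling, usually show whether the kubelet is crash-looping or simply never answering. A minimal sketch of that triage on the failing node (the unit name and endpoint are the ones named in the output above):

    systemctl status kubelet --no-pager --full    # is the unit active, or restarting in a loop?
    journalctl -xeu kubelet -n 200 --no-pager     # last kubelet log lines, including any fatal config error
    curl -sS http://127.0.0.1:10248/healthz; echo # kubeadm polled this for 4m0s and never got "ok"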
	
	I1208 00:46:35.681436  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:46:36.099633  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:46:36.112518  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:46:36.112573  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:46:36.120714  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:46:36.120723  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:46:36.120772  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:46:36.128165  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:46:36.128218  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:46:36.135603  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:46:36.142958  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:46:36.143011  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:46:36.150557  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.158107  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:46:36.158166  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.165315  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:46:36.172678  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:46:36.172733  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:46:36.179983  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:46:36.221281  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:46:36.221576  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:46:36.304904  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:46:36.304971  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:46:36.305006  832221 kubeadm.go:319] OS: Linux
	I1208 00:46:36.305062  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:46:36.305109  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:46:36.305154  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:46:36.305201  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:46:36.305247  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:46:36.305299  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:46:36.305343  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:46:36.305391  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:46:36.305437  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:46:36.375885  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:46:36.375986  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:46:36.376075  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:46:36.387291  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:46:36.389104  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:46:36.389182  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:46:36.389272  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:46:36.389371  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:46:36.389436  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:46:36.389506  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:46:36.389559  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:46:36.389626  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:46:36.389691  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:46:36.389770  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:46:36.389858  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:46:36.389893  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:46:36.389946  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:46:37.029886  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:46:37.175943  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:46:37.229666  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:46:37.386162  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:46:37.721262  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:46:37.722365  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:46:37.726361  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:46:37.727820  832221 out.go:252]   - Booting up control plane ...
	I1208 00:46:37.727919  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:46:37.727991  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:46:37.728873  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:46:37.743822  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:46:37.744021  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:46:37.751812  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:46:37.751899  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:46:37.751935  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:46:37.878966  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:46:37.879079  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:50:37.879778  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187421s
	I1208 00:50:37.879803  832221 kubeadm.go:319] 
	I1208 00:50:37.879860  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:50:37.879893  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:50:37.879997  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:50:37.880002  832221 kubeadm.go:319] 
	I1208 00:50:37.880106  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:50:37.880137  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:50:37.880167  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:50:37.880170  832221 kubeadm.go:319] 
	I1208 00:50:37.885162  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:50:37.885617  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:50:37.885748  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:50:37.886002  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:50:37.886010  832221 kubeadm.go:319] 
	I1208 00:50:37.886091  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:50:37.886152  832221 kubeadm.go:403] duration metric: took 12m7.43140026s to StartCluster
	I1208 00:50:37.886198  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:50:37.886263  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:50:37.913929  832221 cri.go:89] found id: ""
	I1208 00:50:37.913943  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.913950  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:50:37.913956  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:50:37.914018  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:50:37.940084  832221 cri.go:89] found id: ""
	I1208 00:50:37.940099  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.940106  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:50:37.940111  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:50:37.940168  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:50:37.965369  832221 cri.go:89] found id: ""
	I1208 00:50:37.965385  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.965392  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:50:37.965397  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:50:37.965454  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:50:37.991902  832221 cri.go:89] found id: ""
	I1208 00:50:37.991916  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.991923  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:50:37.991929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:50:37.991989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:50:38.041593  832221 cri.go:89] found id: ""
	I1208 00:50:38.041607  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.041614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:50:38.041619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:50:38.041681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:50:38.082440  832221 cri.go:89] found id: ""
	I1208 00:50:38.082454  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.082461  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:50:38.082467  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:50:38.082527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:50:38.108776  832221 cri.go:89] found id: ""
	I1208 00:50:38.108794  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.108804  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:50:38.108813  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:50:38.108827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:50:38.179358  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:50:38.179368  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:50:38.179379  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:50:38.249264  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:50:38.249284  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:50:38.283297  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:50:38.283313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:50:38.352336  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:50:38.352356  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 00:50:38.370094  832221 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:50:38.370135  832221 out.go:285] * 
	W1208 00:50:38.370244  832221 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.370347  832221 out.go:285] * 
	W1208 00:50:38.372671  832221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:50:38.375987  832221 out.go:203] 
	W1208 00:50:38.377331  832221 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.377432  832221 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:50:38.377486  832221 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:50:38.378650  832221 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976141949Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976389032Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976505948Z" level=info msg="Create NRI interface"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976728531Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976803559Z" level=info msg="runtime interface created"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976871433Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976925095Z" level=info msg="runtime interface starting up..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976975737Z" level=info msg="starting plugins..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.977043373Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.97717112Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:38:28 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.863535575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=86c63571-1518-417d-8c36-88972a10f046 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864340284Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd30f3d8-2e57-4e42-9d38-12f0c72774a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864886538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2294e0c2-3c35-4ad2-b70e-1cf27e140e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865379712Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8bd0e2b4-0a84-462b-a4c0-b4ef6c82ea6b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865907537Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6aa3aa31-43f2-49f4-affe-a3c22725ca07 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.86644149Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ab7db80c-c2d4-4d6c-acf1-db4a7ce32608 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.867005106Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fe935a58-ea6c-4485-86ff-51db887cec2b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:52:44.095681   23219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:44.096401   23219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:44.098722   23219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:44.099251   23219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:44.101075   23219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:52:44 up  5:34,  0 user,  load average: 0.28, 0.25, 0.42
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:52:41 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:42 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 08 00:52:42 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:42 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:42 functional-525396 kubelet[23090]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:42 functional-525396 kubelet[23090]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:42 functional-525396 kubelet[23090]: E1208 00:52:42.505283   23090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:42 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:42 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:43 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1127.
	Dec 08 00:52:43 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:43 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:43 functional-525396 kubelet[23126]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:43 functional-525396 kubelet[23126]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:43 functional-525396 kubelet[23126]: E1208 00:52:43.197271   23126 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:43 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:43 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:44 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 08 00:52:44 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:44 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:44 functional-525396 kubelet[23212]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:44 functional-525396 kubelet[23212]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:44 functional-525396 kubelet[23212]: E1208 00:52:44.093844   23212 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:44 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:44 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (360.772341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)
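The kubelet journal above shows why every control-plane check in this group fails: the kubelet shipped for v1.35.0-beta.0 exits during configuration validation because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kube-apiserver on port 8441 never starts and kubeadm's wait-control-plane phase times out after 4m0s. A minimal diagnostic sketch, assuming the functional-525396 profile from this report is still up; the cgroup-version check is illustrative rather than part of the suite, and the last command only replays the suggestion minikube printed, without verifying that it resolves the cgroup v1 validation failure:

	# "tmpfs" on /sys/fs/cgroup indicates cgroup v1, "cgroup2fs" indicates cgroup v2;
	# the kubelet error above points at the node being on cgroup v1.
	out/minikube-linux-arm64 ssh -p functional-525396 -- stat -fc %T /sys/fs/cgroup

	# Inspect the kubelet restart loop recorded in the journal section above
	# (restart counter 1126, 1127, 1128, ...).
	out/minikube-linux-arm64 ssh -p functional-525396 -- sudo journalctl --no-pager -xeu kubelet | tail -n 40

	# The suggestion minikube itself printed for this failure; not verified here.
	out/minikube-linux-arm64 start -p functional-525396 --extra-config=kubelet.cgroup-driver=systemd

Per the kubeadm [WARNING SystemVerification] text quoted above, kubelet v1.35 or newer additionally requires the kubelet configuration option 'FailCgroupV1' to be set to 'false' before it will run on a cgroup v1 host.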

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-525396 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-525396 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (52.74882ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-525396 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-525396 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-525396 describe po hello-node-connect: exit status 1 (85.767485ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-525396 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-525396 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-525396 logs -l app=hello-node-connect: exit status 1 (62.201795ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-525396 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-525396 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-525396 describe svc hello-node-connect: exit status 1 (58.487404ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-525396 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
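Every kubectl call in this post-mortem fails for the same reason recorded in the StatusCmd block above: the apiserver advertised at 192.168.49.2:8441 was never started, so each request is refused at the TCP level. A short probe sketch, assuming the functional-525396 context and endpoint taken from this report; the curl health probe is illustrative and is not something the suite runs:

	# Prints the control-plane endpoint for the failing context; with the
	# apiserver down this reports the same "connection refused" as above.
	kubectl --context functional-525396 cluster-info

	# Probe the advertised apiserver endpoint directly; a TLS response would mean
	# kube-apiserver is listening, "connection refused" matches this run.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz || true

	# Cross-check with minikube's own status, as the harness does elsewhere in this report.
	out/minikube-linux-arm64 status -p functional-525396 || true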
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (305.489127ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-525396 cache delete minikube-local-cache-test:functional-525396                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl images                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ cache   │ functional-525396 cache reload                                                                           │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ ssh     │ functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │ 08 Dec 25 00:38 UTC │
	│ kubectl │ functional-525396 kubectl -- --context functional-525396 get pods                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ start   │ -p functional-525396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:38 UTC │                     │
	│ config  │ functional-525396 config unset cpus                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ ssh     │ functional-525396 ssh echo hello                                                                         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ config  │ functional-525396 config get cpus                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │                     │
	│ config  │ functional-525396 config set cpus 2                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ config  │ functional-525396 config get cpus                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ config  │ functional-525396 config unset cpus                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ ssh     │ functional-525396 ssh cat /etc/hostname                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │ 08 Dec 25 00:50 UTC │
	│ config  │ functional-525396 config get cpus                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │                     │
	│ tunnel  │ functional-525396 tunnel --alsologtostderr                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │                     │
	│ tunnel  │ functional-525396 tunnel --alsologtostderr                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │                     │
	│ tunnel  │ functional-525396 tunnel --alsologtostderr                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:50 UTC │                     │
	│ addons  │ functional-525396 addons list                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ addons  │ functional-525396 addons list -o json                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:38:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:38:25.865142  832221 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:38:25.865266  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865270  832221 out.go:374] Setting ErrFile to fd 2...
	I1208 00:38:25.865273  832221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:38:25.865522  832221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:38:25.865905  832221 out.go:368] Setting JSON to false
	I1208 00:38:25.866798  832221 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":19238,"bootTime":1765135068,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:38:25.866898  832221 start.go:143] virtualization:  
	I1208 00:38:25.870446  832221 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:38:25.873443  832221 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:38:25.873527  832221 notify.go:221] Checking for updates...
	I1208 00:38:25.877177  832221 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:38:25.880254  832221 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:38:25.883080  832221 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:38:25.885867  832221 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:38:25.888710  832221 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:38:25.892134  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:25.892227  832221 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:38:25.926814  832221 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:38:25.926949  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:25.982933  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:25.973301038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:25.983053  832221 docker.go:319] overlay module found
	I1208 00:38:25.986144  832221 out.go:179] * Using the docker driver based on existing profile
	I1208 00:38:25.988897  832221 start.go:309] selected driver: docker
	I1208 00:38:25.988906  832221 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:25.989004  832221 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:38:25.989104  832221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:38:26.085905  832221 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:38:26.075169003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:38:26.086340  832221 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:38:26.086364  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:26.086419  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:26.086463  832221 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:26.089599  832221 out.go:179] * Starting "functional-525396" primary control-plane node in "functional-525396" cluster
	I1208 00:38:26.092632  832221 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:38:26.095593  832221 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:38:26.098465  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:26.098511  832221 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:38:26.098512  832221 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:38:26.098520  832221 cache.go:65] Caching tarball of preloaded images
	I1208 00:38:26.098640  832221 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 00:38:26.098648  832221 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 00:38:26.098767  832221 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/config.json ...
	I1208 00:38:26.118762  832221 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:38:26.118779  832221 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:38:26.118798  832221 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:38:26.118832  832221 start.go:360] acquireMachinesLock for functional-525396: {Name:mk7eeab2b5b24a7b92f82c9641daa3902250867b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:38:26.118982  832221 start.go:364] duration metric: took 72.616µs to acquireMachinesLock for "functional-525396"
	I1208 00:38:26.119001  832221 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:38:26.119005  832221 fix.go:54] fixHost starting: 
	I1208 00:38:26.119276  832221 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
	I1208 00:38:26.135702  832221 fix.go:112] recreateIfNeeded on functional-525396: state=Running err=<nil>
	W1208 00:38:26.135737  832221 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:38:26.138942  832221 out.go:252] * Updating the running docker "functional-525396" container ...
	I1208 00:38:26.138968  832221 machine.go:94] provisionDockerMachine start ...
	I1208 00:38:26.139048  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.156040  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.156360  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.156366  832221 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:38:26.306195  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.306209  832221 ubuntu.go:182] provisioning hostname "functional-525396"
	I1208 00:38:26.306278  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.323547  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.323853  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.323861  832221 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-525396 && echo "functional-525396" | sudo tee /etc/hostname
	I1208 00:38:26.483358  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-525396
	
	I1208 00:38:26.483423  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.500892  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:26.501201  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:26.501214  832221 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-525396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-525396/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-525396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:38:26.651219  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:38:26.651236  832221 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 00:38:26.651262  832221 ubuntu.go:190] setting up certificates
	I1208 00:38:26.651269  832221 provision.go:84] configureAuth start
	I1208 00:38:26.651330  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:26.668935  832221 provision.go:143] copyHostCerts
	I1208 00:38:26.669007  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 00:38:26.669020  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 00:38:26.669092  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 00:38:26.669226  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 00:38:26.669232  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 00:38:26.669258  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 00:38:26.669316  832221 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 00:38:26.669319  832221 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 00:38:26.669351  832221 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 00:38:26.669396  832221 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.functional-525396 san=[127.0.0.1 192.168.49.2 functional-525396 localhost minikube]
	I1208 00:38:26.882878  832221 provision.go:177] copyRemoteCerts
	I1208 00:38:26.882932  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:38:26.882976  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:26.900195  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.008298  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:38:27.026654  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:38:27.044245  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:38:27.061828  832221 provision.go:87] duration metric: took 410.535167ms to configureAuth
	I1208 00:38:27.061847  832221 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:38:27.062049  832221 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:38:27.062144  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.079069  832221 main.go:143] libmachine: Using SSH client type: native
	I1208 00:38:27.079387  832221 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1208 00:38:27.079399  832221 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 00:38:27.403353  832221 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 00:38:27.403368  832221 machine.go:97] duration metric: took 1.264393629s to provisionDockerMachine
	I1208 00:38:27.403378  832221 start.go:293] postStartSetup for "functional-525396" (driver="docker")
	I1208 00:38:27.403389  832221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:38:27.403457  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:38:27.403520  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.422294  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.531362  832221 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:38:27.534870  832221 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:38:27.534888  832221 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:38:27.534898  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 00:38:27.534950  832221 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 00:38:27.535028  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 00:38:27.535101  832221 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts -> hosts in /etc/test/nested/copy/791807
	I1208 00:38:27.535142  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/791807
	I1208 00:38:27.543303  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:27.561264  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts --> /etc/test/nested/copy/791807/hosts (40 bytes)
	I1208 00:38:27.579215  832221 start.go:296] duration metric: took 175.824145ms for postStartSetup
	I1208 00:38:27.579284  832221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:38:27.579329  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.597098  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.699502  832221 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:38:27.703953  832221 fix.go:56] duration metric: took 1.584940995s for fixHost
	I1208 00:38:27.703967  832221 start.go:83] releasing machines lock for "functional-525396", held for 1.584978296s
	I1208 00:38:27.704034  832221 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-525396
	I1208 00:38:27.720794  832221 ssh_runner.go:195] Run: cat /version.json
	I1208 00:38:27.720838  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.721083  832221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:38:27.721126  832221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
	I1208 00:38:27.740766  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.744839  832221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
	I1208 00:38:27.842382  832221 ssh_runner.go:195] Run: systemctl --version
	I1208 00:38:27.933498  832221 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 00:38:27.969664  832221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:38:27.973926  832221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:38:27.973991  832221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:38:27.981670  832221 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:38:27.981684  832221 start.go:496] detecting cgroup driver to use...
	I1208 00:38:27.981714  832221 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:38:27.981757  832221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 00:38:27.996930  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 00:38:28.011523  832221 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:38:28.011601  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:38:28.029696  832221 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:38:28.043991  832221 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:38:28.162184  832221 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:38:28.302345  832221 docker.go:234] disabling docker service ...
	I1208 00:38:28.302409  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:38:28.316944  832221 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:38:28.329323  832221 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:38:28.471674  832221 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:38:28.594617  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:38:28.607360  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:38:28.621958  832221 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 00:38:28.622014  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.631486  832221 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 00:38:28.631544  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.641093  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.650549  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.660155  832221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:38:28.667958  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.676952  832221 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.685235  832221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 00:38:28.693630  832221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:38:28.701133  832221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:38:28.708624  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:28.814162  832221 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 00:38:28.986282  832221 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 00:38:28.986346  832221 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 00:38:28.991517  832221 start.go:564] Will wait 60s for crictl version
	I1208 00:38:28.991573  832221 ssh_runner.go:195] Run: which crictl
	I1208 00:38:28.995534  832221 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:38:29.025912  832221 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 00:38:29.025997  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.062279  832221 ssh_runner.go:195] Run: crio --version
	I1208 00:38:29.096298  832221 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 00:38:29.099065  832221 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:38:29.116028  832221 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:38:29.122672  832221 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:38:29.125488  832221 kubeadm.go:884] updating cluster {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:38:29.125636  832221 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:38:29.125706  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.164815  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.164827  832221 crio.go:433] Images already preloaded, skipping extraction
	I1208 00:38:29.164879  832221 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:38:29.195499  832221 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 00:38:29.195511  832221 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:38:29.195518  832221 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1208 00:38:29.195647  832221 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-525396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:38:29.195726  832221 ssh_runner.go:195] Run: crio config
	I1208 00:38:29.250138  832221 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:38:29.250159  832221 cni.go:84] Creating CNI manager for ""
	I1208 00:38:29.250168  832221 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:38:29.250181  832221 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:38:29.250206  832221 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-525396 NodeName:functional-525396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:38:29.250329  832221 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-525396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:38:29.250397  832221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:38:29.258150  832221 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:38:29.258234  832221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:38:29.265694  832221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 00:38:29.278151  832221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:38:29.290865  832221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1208 00:38:29.303277  832221 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:38:29.306745  832221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:38:29.413867  832221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:38:29.757020  832221 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396 for IP: 192.168.49.2
	I1208 00:38:29.757040  832221 certs.go:195] generating shared ca certs ...
	I1208 00:38:29.757055  832221 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:38:29.757227  832221 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 00:38:29.757282  832221 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 00:38:29.757288  832221 certs.go:257] generating profile certs ...
	I1208 00:38:29.757406  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.key
	I1208 00:38:29.757463  832221 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key.7790121c
	I1208 00:38:29.757516  832221 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key
	I1208 00:38:29.757642  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 00:38:29.757680  832221 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 00:38:29.757687  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 00:38:29.757715  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:38:29.757753  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:38:29.757774  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 00:38:29.757826  832221 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 00:38:29.761393  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:38:29.783882  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:38:29.803461  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:38:29.822714  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:38:29.839981  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:38:29.857351  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 00:38:29.874240  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:38:29.890650  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:38:29.906746  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 00:38:29.924059  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 00:38:29.940748  832221 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:38:29.958110  832221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:38:29.970093  832221 ssh_runner.go:195] Run: openssl version
	I1208 00:38:29.976075  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.983124  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 00:38:29.990594  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994143  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 00:38:29.994197  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 00:38:30.038336  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:38:30.048261  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.057929  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 00:38:30.067406  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072044  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.072104  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 00:38:30.114205  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:38:30.122367  832221 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.130206  832221 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:38:30.138222  832221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142205  832221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.142264  832221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:38:30.188681  832221 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:38:30.197066  832221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:38:30.201256  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:38:30.247635  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:38:30.290467  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:38:30.332415  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:38:30.373141  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:38:30.413979  832221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:38:30.454763  832221 kubeadm.go:401] StartCluster: {Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:38:30.454864  832221 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 00:38:30.454938  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.481225  832221 cri.go:89] found id: ""
	I1208 00:38:30.481285  832221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:38:30.488799  832221 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:38:30.488808  832221 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:38:30.488859  832221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:38:30.495821  832221 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.496331  832221 kubeconfig.go:125] found "functional-525396" server: "https://192.168.49.2:8441"
	I1208 00:38:30.497560  832221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:38:30.505232  832221 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:23:53.462513047 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:38:29.298599774 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1208 00:38:30.505258  832221 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:38:30.505269  832221 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 00:38:30.505341  832221 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:38:30.544576  832221 cri.go:89] found id: ""
	I1208 00:38:30.544636  832221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:38:30.564190  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:38:30.571945  832221 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  8 00:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  8 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  8 00:28 /etc/kubernetes/scheduler.conf
	
	I1208 00:38:30.572003  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:38:30.579767  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:38:30.588961  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.589038  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:38:30.596275  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.604001  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.604058  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:38:30.611049  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:38:30.618317  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:38:30.618369  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:38:30.625673  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:38:30.633203  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:30.679020  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.303260  832221 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.624214812s)
	I1208 00:38:32.303321  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.499121  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.557405  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:38:32.605845  832221 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:38:32.605924  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.106778  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:33.606873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.106818  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:34.606134  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.106245  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.607017  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.106011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:36.606401  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.106569  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:37.606153  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.106367  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.605995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.106910  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:39.606698  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:40.606687  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.106589  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.606067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.106823  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:42.606794  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.106122  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:43.606931  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.106765  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.606092  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.107046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:45.606088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.106757  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.606004  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.106996  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:47.606590  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.106432  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:48.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.106745  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.606390  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.106196  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:50.606618  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.106064  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:51.606867  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.106995  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.606766  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.106131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:53.606779  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.106290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:54.606219  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.106089  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.607007  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.106717  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:56.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.106475  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:57.607046  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.106582  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.606125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.107067  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:59.606667  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.106461  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:00.606353  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.106471  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.606654  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.107110  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:02.607006  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.106780  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:03.606382  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.106088  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.606332  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.106060  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:05.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.106803  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:06.606107  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.106414  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.606178  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.106868  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:08.606030  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.106375  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:09.606102  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.107011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.606304  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.106108  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:11.606093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.106096  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:12.606827  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.606893  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.107045  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:14.606816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.106126  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:15.606899  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.106572  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.606111  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.106384  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:17.606103  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.106801  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:18.606703  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.106595  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.606139  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.106918  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:20.606350  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.106147  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:21.606821  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.106994  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.606129  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.106114  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:23.606499  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.106132  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:24.606921  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.106736  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.606121  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.106425  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:26.606155  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.106763  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:27.606883  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.106058  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.606943  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.106991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:29.606966  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.106181  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:30.606342  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.106653  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.606117  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.106026  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:32.606138  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:32.606213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:32.631935  832221 cri.go:89] found id: ""
	I1208 00:39:32.631949  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.631956  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:32.631962  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:32.632027  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:32.657240  832221 cri.go:89] found id: ""
	I1208 00:39:32.657260  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.657267  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:32.657273  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:32.657332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:32.686247  832221 cri.go:89] found id: ""
	I1208 00:39:32.686261  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.686269  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:32.686274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:32.686334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:32.712330  832221 cri.go:89] found id: ""
	I1208 00:39:32.712345  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.712352  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:32.712358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:32.712416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:32.738663  832221 cri.go:89] found id: ""
	I1208 00:39:32.738678  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.738685  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:32.738690  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:32.738755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:32.765710  832221 cri.go:89] found id: ""
	I1208 00:39:32.765725  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.765731  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:32.765737  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:32.765792  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:32.791480  832221 cri.go:89] found id: ""
	I1208 00:39:32.791494  832221 logs.go:282] 0 containers: []
	W1208 00:39:32.791501  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:32.791509  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:32.791520  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:32.856630  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:32.856654  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:32.873574  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:32.873591  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:32.937953  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:32.928926   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.929752   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931252   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.931782   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:32.933524   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:32.937966  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:32.937977  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:33.008749  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:33.008776  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.542093  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:35.553517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:35.553575  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:35.584212  832221 cri.go:89] found id: ""
	I1208 00:39:35.584226  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.584233  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:35.584238  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:35.584296  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:35.615871  832221 cri.go:89] found id: ""
	I1208 00:39:35.615885  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.615892  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:35.615897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:35.615954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:35.641597  832221 cri.go:89] found id: ""
	I1208 00:39:35.641611  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.641618  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:35.641623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:35.641683  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:35.667538  832221 cri.go:89] found id: ""
	I1208 00:39:35.667551  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.667567  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:35.667572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:35.667633  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:35.696105  832221 cri.go:89] found id: ""
	I1208 00:39:35.696118  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.696124  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:35.696130  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:35.696187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:35.725150  832221 cri.go:89] found id: ""
	I1208 00:39:35.725165  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.725172  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:35.725178  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:35.725236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:35.752762  832221 cri.go:89] found id: ""
	I1208 00:39:35.752776  832221 logs.go:282] 0 containers: []
	W1208 00:39:35.752783  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:35.752791  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:35.752801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:35.780454  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:35.780471  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:35.846096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:35.846118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:35.863081  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:35.863098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:35.932235  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:35.923881   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.924549   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926219   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.926824   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:35.928355   11137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:35.932246  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:35.932259  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.502146  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:38.514634  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:38.514691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:38.548208  832221 cri.go:89] found id: ""
	I1208 00:39:38.548223  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.548230  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:38.548235  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:38.548305  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:38.579066  832221 cri.go:89] found id: ""
	I1208 00:39:38.579080  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.579087  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:38.579092  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:38.579154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:38.605928  832221 cri.go:89] found id: ""
	I1208 00:39:38.605942  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.605949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:38.605954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:38.606013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:38.631317  832221 cri.go:89] found id: ""
	I1208 00:39:38.631332  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.631339  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:38.631350  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:38.631410  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:38.657581  832221 cri.go:89] found id: ""
	I1208 00:39:38.657595  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.657602  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:38.657607  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:38.657664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:38.688104  832221 cri.go:89] found id: ""
	I1208 00:39:38.688118  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.688125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:38.688131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:38.688191  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:38.712900  832221 cri.go:89] found id: ""
	I1208 00:39:38.712914  832221 logs.go:282] 0 containers: []
	W1208 00:39:38.712921  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:38.712929  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:38.712939  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:38.782215  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:38.782236  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:38.813188  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:38.813203  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:38.882554  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:38.882574  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:38.899573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:38.899590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:38.963587  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:38.955568   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.956072   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.957724   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.958210   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:38.959707   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.464816  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:41.476933  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:41.476994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:41.519038  832221 cri.go:89] found id: ""
	I1208 00:39:41.519052  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.519059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:41.519065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:41.519120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:41.549931  832221 cri.go:89] found id: ""
	I1208 00:39:41.549946  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.549953  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:41.549958  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:41.550016  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:41.579952  832221 cri.go:89] found id: ""
	I1208 00:39:41.579966  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.579973  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:41.579978  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:41.580038  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:41.609851  832221 cri.go:89] found id: ""
	I1208 00:39:41.609865  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.609873  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:41.609878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:41.609940  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:41.635896  832221 cri.go:89] found id: ""
	I1208 00:39:41.635910  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.635917  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:41.635923  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:41.635986  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:41.662056  832221 cri.go:89] found id: ""
	I1208 00:39:41.662083  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.662091  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:41.662097  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:41.662170  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:41.687327  832221 cri.go:89] found id: ""
	I1208 00:39:41.687342  832221 logs.go:282] 0 containers: []
	W1208 00:39:41.687349  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:41.687357  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:41.687367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:41.753129  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:41.753148  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:41.769911  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:41.769927  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:41.838088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:41.829386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.829964   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.831698   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.832336   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:41.834090   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:41.838099  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:41.838111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:41.910629  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:41.910651  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:44.440476  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:44.450677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:44.450737  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:44.477661  832221 cri.go:89] found id: ""
	I1208 00:39:44.477674  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.477681  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:44.477687  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:44.477754  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:44.502810  832221 cri.go:89] found id: ""
	I1208 00:39:44.502824  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.502831  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:44.502836  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:44.502922  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:44.536158  832221 cri.go:89] found id: ""
	I1208 00:39:44.536171  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.536178  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:44.536187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:44.536245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:44.569819  832221 cri.go:89] found id: ""
	I1208 00:39:44.569832  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.569839  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:44.569844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:44.569900  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:44.596822  832221 cri.go:89] found id: ""
	I1208 00:39:44.596837  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.596844  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:44.596849  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:44.596909  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:44.626118  832221 cri.go:89] found id: ""
	I1208 00:39:44.626132  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.626139  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:44.626159  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:44.626220  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:44.651327  832221 cri.go:89] found id: ""
	I1208 00:39:44.651341  832221 logs.go:282] 0 containers: []
	W1208 00:39:44.651348  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:44.651356  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:44.651366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:44.717153  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:44.717174  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:44.734169  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:44.734200  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:44.800240  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:44.790893   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.791794   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793386   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.793938   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:44.795621   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:44.800252  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:44.800263  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:44.873699  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:44.873729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.404232  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:47.415493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:47.415558  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:47.442934  832221 cri.go:89] found id: ""
	I1208 00:39:47.442948  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.442955  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:47.442961  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:47.443025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:47.468072  832221 cri.go:89] found id: ""
	I1208 00:39:47.468086  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.468093  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:47.468099  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:47.468169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:47.499439  832221 cri.go:89] found id: ""
	I1208 00:39:47.499452  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.499460  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:47.499465  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:47.499522  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:47.525160  832221 cri.go:89] found id: ""
	I1208 00:39:47.525173  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.525180  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:47.525186  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:47.525261  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:47.557881  832221 cri.go:89] found id: ""
	I1208 00:39:47.557902  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.557909  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:47.557915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:47.557973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:47.585993  832221 cri.go:89] found id: ""
	I1208 00:39:47.586006  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.586013  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:47.586018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:47.586074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:47.611544  832221 cri.go:89] found id: ""
	I1208 00:39:47.611559  832221 logs.go:282] 0 containers: []
	W1208 00:39:47.611565  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:47.611573  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:47.611594  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:47.673948  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:47.665109   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.665997   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667624   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.667917   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:47.669389   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:47.673960  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:47.673971  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:47.746050  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:47.746071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:47.778206  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:47.778228  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:47.843769  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:47.843788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.361131  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:50.373118  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:50.373178  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:50.402177  832221 cri.go:89] found id: ""
	I1208 00:39:50.402192  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.402199  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:50.402204  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:50.402262  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:50.428277  832221 cri.go:89] found id: ""
	I1208 00:39:50.428291  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.428298  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:50.428303  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:50.428361  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:50.453780  832221 cri.go:89] found id: ""
	I1208 00:39:50.453793  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.453801  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:50.453806  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:50.453867  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:50.478816  832221 cri.go:89] found id: ""
	I1208 00:39:50.478830  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.478838  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:50.478887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:50.478952  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:50.506494  832221 cri.go:89] found id: ""
	I1208 00:39:50.506508  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.506516  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:50.506523  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:50.506581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:50.548254  832221 cri.go:89] found id: ""
	I1208 00:39:50.548267  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.548275  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:50.548289  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:50.548345  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:50.580999  832221 cri.go:89] found id: ""
	I1208 00:39:50.581013  832221 logs.go:282] 0 containers: []
	W1208 00:39:50.581020  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:50.581028  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:50.581038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:50.646872  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:50.646894  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:50.663705  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:50.663722  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:50.731208  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:50.722671   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.723587   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725324   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.725819   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:50.727307   11649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:50.731220  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:50.731231  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:50.800530  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:50.800552  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:53.328838  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:53.338798  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:53.338876  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:53.364078  832221 cri.go:89] found id: ""
	I1208 00:39:53.364093  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.364100  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:53.364106  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:53.364165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:53.389870  832221 cri.go:89] found id: ""
	I1208 00:39:53.389884  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.389891  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:53.389897  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:53.389955  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:53.415578  832221 cri.go:89] found id: ""
	I1208 00:39:53.415592  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.415600  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:53.415606  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:53.415664  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:53.440749  832221 cri.go:89] found id: ""
	I1208 00:39:53.440763  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.440769  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:53.440775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:53.440837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:53.469528  832221 cri.go:89] found id: ""
	I1208 00:39:53.469542  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.469550  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:53.469555  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:53.469614  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:53.494205  832221 cri.go:89] found id: ""
	I1208 00:39:53.494219  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.494225  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:53.494231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:53.494286  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:53.536734  832221 cri.go:89] found id: ""
	I1208 00:39:53.536748  832221 logs.go:282] 0 containers: []
	W1208 00:39:53.536755  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:53.536763  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:53.536773  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:53.608590  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:53.608610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:53.625117  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:53.625134  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:53.687237  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:53.678561   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.679227   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.680923   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.681488   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:53.683062   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:53.687248  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:53.687258  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:53.755459  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:53.755480  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.290756  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:56.302211  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:56.302272  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:56.327085  832221 cri.go:89] found id: ""
	I1208 00:39:56.327098  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.327105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:56.327110  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:56.327165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:56.351553  832221 cri.go:89] found id: ""
	I1208 00:39:56.351567  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.351574  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:56.351579  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:56.351636  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:56.375432  832221 cri.go:89] found id: ""
	I1208 00:39:56.375445  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.375451  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:56.375456  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:56.375513  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:56.399254  832221 cri.go:89] found id: ""
	I1208 00:39:56.399267  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.399274  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:56.399282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:56.399337  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:56.424239  832221 cri.go:89] found id: ""
	I1208 00:39:56.424253  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.424260  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:56.424265  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:56.424322  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:56.447970  832221 cri.go:89] found id: ""
	I1208 00:39:56.447983  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.447990  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:56.447996  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:56.448059  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:56.480639  832221 cri.go:89] found id: ""
	I1208 00:39:56.480652  832221 logs.go:282] 0 containers: []
	W1208 00:39:56.480659  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:56.480666  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:56.480680  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:56.514333  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:56.514349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:56.587248  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:56.587268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:56.604138  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:56.604156  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:56.667583  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:56.659097   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.659664   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661372   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.661868   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:56.663527   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:56.667593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:56.667605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.236478  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:59.246590  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:59.246653  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:59.274726  832221 cri.go:89] found id: ""
	I1208 00:39:59.274739  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.274746  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:59.274752  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:39:59.274816  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:59.302946  832221 cri.go:89] found id: ""
	I1208 00:39:59.302960  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.302967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:39:59.302972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:39:59.303036  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:59.328486  832221 cri.go:89] found id: ""
	I1208 00:39:59.328510  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.328517  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:39:59.328522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:59.328583  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:59.354620  832221 cri.go:89] found id: ""
	I1208 00:39:59.354638  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.354645  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:59.354651  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:59.354722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:59.379131  832221 cri.go:89] found id: ""
	I1208 00:39:59.379145  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.379152  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:59.379157  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:59.379221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:59.407900  832221 cri.go:89] found id: ""
	I1208 00:39:59.407915  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.407921  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:59.407930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:59.407999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:59.432790  832221 cri.go:89] found id: ""
	I1208 00:39:59.432804  832221 logs.go:282] 0 containers: []
	W1208 00:39:59.432811  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:59.432819  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:59.432829  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:59.498500  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:59.498521  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:59.517843  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:59.517860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:59.592346  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:59.584344   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.584768   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586377   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.586970   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:59.588434   11968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:59.592356  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:39:59.592366  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:39:59.660798  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:39:59.660821  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.193318  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:02.204389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:02.204452  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:02.233248  832221 cri.go:89] found id: ""
	I1208 00:40:02.233262  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.233272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:02.233277  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:02.233338  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:02.259542  832221 cri.go:89] found id: ""
	I1208 00:40:02.259555  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.259562  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:02.259567  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:02.259626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:02.284406  832221 cri.go:89] found id: ""
	I1208 00:40:02.284421  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.284428  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:02.284433  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:02.284492  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:02.314792  832221 cri.go:89] found id: ""
	I1208 00:40:02.314807  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.314815  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:02.314820  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:02.314902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:02.345720  832221 cri.go:89] found id: ""
	I1208 00:40:02.345735  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.345742  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:02.345748  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:02.345806  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:02.374260  832221 cri.go:89] found id: ""
	I1208 00:40:02.374275  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.374282  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:02.374288  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:02.374356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:02.401424  832221 cri.go:89] found id: ""
	I1208 00:40:02.401448  832221 logs.go:282] 0 containers: []
	W1208 00:40:02.401456  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:02.401464  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:02.401477  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:02.418749  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:02.418772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:02.488580  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:02.480395   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.481083   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.482578   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.483112   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:02.484782   12067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:02.488593  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:02.488605  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:02.561942  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:02.561963  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:02.594984  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:02.595001  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.164061  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:05.174102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:05.174162  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:05.200676  832221 cri.go:89] found id: ""
	I1208 00:40:05.200690  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.200697  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:05.200702  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:05.200762  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:05.229843  832221 cri.go:89] found id: ""
	I1208 00:40:05.229857  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.229864  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:05.229869  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:05.229923  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:05.254905  832221 cri.go:89] found id: ""
	I1208 00:40:05.254919  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.254926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:05.254930  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:05.254989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:05.284106  832221 cri.go:89] found id: ""
	I1208 00:40:05.284120  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.284127  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:05.284132  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:05.284197  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:05.308626  832221 cri.go:89] found id: ""
	I1208 00:40:05.308640  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.308647  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:05.308652  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:05.308714  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:05.337161  832221 cri.go:89] found id: ""
	I1208 00:40:05.337175  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.337182  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:05.337187  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:05.337268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:05.362077  832221 cri.go:89] found id: ""
	I1208 00:40:05.362091  832221 logs.go:282] 0 containers: []
	W1208 00:40:05.362098  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:05.362105  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:05.362116  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:05.428096  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:05.428115  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:05.445139  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:05.445161  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:05.507290  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:05.497084   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.497893   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.499577   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.500019   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:05.501556   12175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:05.507310  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:05.507321  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:05.586340  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:05.586361  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.118998  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:08.129512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:08.129588  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:08.156251  832221 cri.go:89] found id: ""
	I1208 00:40:08.156265  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.156272  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:08.156278  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:08.156344  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:08.183906  832221 cri.go:89] found id: ""
	I1208 00:40:08.183919  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.183926  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:08.183931  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:08.183987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:08.210358  832221 cri.go:89] found id: ""
	I1208 00:40:08.210372  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.210379  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:08.210384  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:08.210442  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:08.235462  832221 cri.go:89] found id: ""
	I1208 00:40:08.235476  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.235483  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:08.235489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:08.235544  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:08.261687  832221 cri.go:89] found id: ""
	I1208 00:40:08.261700  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.261707  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:08.261713  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:08.261771  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:08.285826  832221 cri.go:89] found id: ""
	I1208 00:40:08.285842  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.285849  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:08.285854  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:08.285912  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:08.312132  832221 cri.go:89] found id: ""
	I1208 00:40:08.312146  832221 logs.go:282] 0 containers: []
	W1208 00:40:08.312153  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:08.312161  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:08.312171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:08.380160  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:08.371459   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.372004   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.373773   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.374174   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:08.375669   12277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:08.380177  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:08.380187  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:08.455282  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:08.455305  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:08.490186  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:08.490207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:08.563751  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:08.563779  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
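
The cycle above repeats every few seconds: the runner checks with pgrep whether a kube-apiserver process exists on the node, and only then re-gathers logs. Below is a minimal sketch of that kind of poll, assuming the pgrep pattern from the log and running it locally rather than through minikube's ssh_runner; it is an illustration of the retry behavior visible in the timestamps, not minikube's actual code.

    // healthpoll.go: poll for a kube-apiserver process the way the log above does,
    // using the same pgrep pattern. Local execution (no SSH) is an assumption made
    // to keep the sketch self-contained.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func apiserverRunning() bool {
        // pgrep exits non-zero when no process matches the pattern.
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            fmt.Println("kube-apiserver not running yet, retrying...")
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
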
	I1208 00:40:11.082398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:11.092581  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:11.092642  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:11.118553  832221 cri.go:89] found id: ""
	I1208 00:40:11.118568  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.118575  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:11.118580  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:11.118638  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:11.144055  832221 cri.go:89] found id: ""
	I1208 00:40:11.144070  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.144077  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:11.144082  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:11.144144  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:11.169906  832221 cri.go:89] found id: ""
	I1208 00:40:11.169919  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.169926  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:11.169931  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:11.169988  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:11.197596  832221 cri.go:89] found id: ""
	I1208 00:40:11.197610  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.197617  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:11.197623  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:11.197681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:11.223606  832221 cri.go:89] found id: ""
	I1208 00:40:11.223624  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.223631  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:11.223636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:11.223693  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:11.248818  832221 cri.go:89] found id: ""
	I1208 00:40:11.248832  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.248838  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:11.248844  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:11.248902  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:11.273540  832221 cri.go:89] found id: ""
	I1208 00:40:11.273554  832221 logs.go:282] 0 containers: []
	W1208 00:40:11.273561  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:11.273568  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:11.273579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:11.338706  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:11.338726  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:11.357554  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:11.357571  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:11.420756  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:11.412144   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.412763   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.414526   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.415091   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:11.416860   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:11.420767  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:11.420788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:11.489139  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:11.489157  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
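
Each pass also walks the control-plane components one by one with "crictl ps -a --quiet --name=<component>" and reports "0 containers" when nothing matches, which is what every cycle in this log shows. The following sketch reproduces that inventory check with the component names and crictl flags taken verbatim from the log; running crictl directly on the node (instead of via ssh_runner) is an assumption for illustration.

    // crictl_inventory.go: list control-plane containers the way cri.go does in the
    // log above, reporting when no container matches a component name.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
        }
    }
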
	I1208 00:40:14.024714  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:14.035808  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:14.035873  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:14.061793  832221 cri.go:89] found id: ""
	I1208 00:40:14.061807  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.061814  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:14.061819  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:14.061875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:14.090633  832221 cri.go:89] found id: ""
	I1208 00:40:14.090647  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.090654  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:14.090661  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:14.090719  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:14.115546  832221 cri.go:89] found id: ""
	I1208 00:40:14.115560  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.115567  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:14.115572  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:14.115629  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:14.141065  832221 cri.go:89] found id: ""
	I1208 00:40:14.141079  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.141086  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:14.141091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:14.141154  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:14.165799  832221 cri.go:89] found id: ""
	I1208 00:40:14.165814  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.165821  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:14.165826  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:14.165886  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:14.195480  832221 cri.go:89] found id: ""
	I1208 00:40:14.195494  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.195501  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:14.195506  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:14.195564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:14.220362  832221 cri.go:89] found id: ""
	I1208 00:40:14.220377  832221 logs.go:282] 0 containers: []
	W1208 00:40:14.220384  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:14.220392  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:14.220405  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:14.287292  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:14.279139   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.279945   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281541   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.281827   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:14.283399   12489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:14.287303  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:14.287313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:14.356018  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:14.356038  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:14.387237  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:14.387253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:14.454492  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:14.454512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
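
Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach the apiserver at https://localhost:8441 and gets "connection refused", which is consistent with the empty kube-apiserver container listings above. A plain TCP dial against that port, as sketched below, is enough to distinguish "nothing is listening" from a TLS or auth problem; the port number comes from the kubeconfig used in the log, and this probe is an illustrative check, not part of the test harness.

    // probe8441.go: check whether anything is listening on the apiserver port that
    // kubectl keeps failing to reach in the log above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log: dial tcp [::1]:8441: connect: connection refused.
            fmt.Printf("apiserver port not reachable: %v\n", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441; the kubectl failures are not a plain connect error")
    }
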
	I1208 00:40:16.972125  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:16.982309  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:16.982372  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:17.017693  832221 cri.go:89] found id: ""
	I1208 00:40:17.017706  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.017714  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:17.017719  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:17.017778  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:17.044376  832221 cri.go:89] found id: ""
	I1208 00:40:17.044391  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.044399  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:17.044404  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:17.044473  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:17.070587  832221 cri.go:89] found id: ""
	I1208 00:40:17.070601  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.070608  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:17.070613  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:17.070672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:17.095978  832221 cri.go:89] found id: ""
	I1208 00:40:17.095992  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.095999  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:17.096004  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:17.096062  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:17.122135  832221 cri.go:89] found id: ""
	I1208 00:40:17.122149  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.122156  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:17.122161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:17.122221  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:17.148103  832221 cri.go:89] found id: ""
	I1208 00:40:17.148118  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.148125  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:17.148131  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:17.148192  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:17.172943  832221 cri.go:89] found id: ""
	I1208 00:40:17.172957  832221 logs.go:282] 0 containers: []
	W1208 00:40:17.172964  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:17.172971  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:17.172982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:17.238368  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:17.238387  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:17.255667  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:17.255685  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:17.321644  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:17.313285   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.313959   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.315591   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.316271   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:17.317925   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:17.321656  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:17.321667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:17.394476  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:17.394498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:19.927345  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:19.939629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:19.939691  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:19.965406  832221 cri.go:89] found id: ""
	I1208 00:40:19.965420  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.965427  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:19.965432  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:19.965500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:19.992009  832221 cri.go:89] found id: ""
	I1208 00:40:19.992023  832221 logs.go:282] 0 containers: []
	W1208 00:40:19.992030  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:19.992035  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:19.992098  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:20.029302  832221 cri.go:89] found id: ""
	I1208 00:40:20.029317  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.029324  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:20.029330  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:20.029399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:20.058056  832221 cri.go:89] found id: ""
	I1208 00:40:20.058071  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.058085  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:20.058091  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:20.058165  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:20.084189  832221 cri.go:89] found id: ""
	I1208 00:40:20.084203  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.084211  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:20.084216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:20.084291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:20.111361  832221 cri.go:89] found id: ""
	I1208 00:40:20.111376  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.111383  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:20.111389  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:20.111449  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:20.141805  832221 cri.go:89] found id: ""
	I1208 00:40:20.141819  832221 logs.go:282] 0 containers: []
	W1208 00:40:20.141826  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:20.141834  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:20.141844  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:20.169490  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:20.169506  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:20.234965  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:20.234985  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:20.252060  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:20.252078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:20.320257  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:20.311257   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.311721   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.313608   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.314307   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:20.315929   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:20.320267  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:20.320280  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
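
Besides the failed "describe nodes" call, each cycle gathers kubelet and CRI-O logs via journalctl, kernel messages via dmesg, and a container listing via crictl (falling back to docker). The sketch below runs those exact commands from the log through /bin/bash and collects their output; bundling them into a map keyed by log name is an assumption made purely for illustration.

    // gatherlogs.go: run the "Gathering logs for ..." commands seen in the cycles
    // above and collect their output locally.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        commands := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "CRI-O":            "sudo journalctl -u crio -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        logs := make(map[string]string, len(commands))
        for name, cmd := range commands {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
            }
            logs[name] = string(out)
            fmt.Printf("gathered %d bytes for %s\n", len(out), name)
        }
    }
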
	I1208 00:40:22.888858  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:22.899382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:22.899447  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:22.924604  832221 cri.go:89] found id: ""
	I1208 00:40:22.924619  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.924625  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:22.924631  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:22.924698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:22.955239  832221 cri.go:89] found id: ""
	I1208 00:40:22.955253  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.955259  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:22.955264  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:22.955323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:22.981222  832221 cri.go:89] found id: ""
	I1208 00:40:22.981237  832221 logs.go:282] 0 containers: []
	W1208 00:40:22.981244  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:22.981250  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:22.981317  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:23.011070  832221 cri.go:89] found id: ""
	I1208 00:40:23.011085  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.011092  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:23.011098  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:23.011169  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:23.038240  832221 cri.go:89] found id: ""
	I1208 00:40:23.038255  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.038263  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:23.038268  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:23.038329  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:23.068452  832221 cri.go:89] found id: ""
	I1208 00:40:23.068466  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.068473  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:23.068479  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:23.068536  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:23.094006  832221 cri.go:89] found id: ""
	I1208 00:40:23.094020  832221 logs.go:282] 0 containers: []
	W1208 00:40:23.094027  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:23.094035  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:23.094047  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:23.160498  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:23.160517  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:23.177630  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:23.177647  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:23.241245  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:23.232409   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.233267   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.234957   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.235597   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:23.237234   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:23.241256  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:23.241268  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:23.310140  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:23.310159  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:25.838645  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:25.849038  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:25.849104  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:25.876484  832221 cri.go:89] found id: ""
	I1208 00:40:25.876499  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.876506  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:25.876512  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:25.876574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:25.906565  832221 cri.go:89] found id: ""
	I1208 00:40:25.906579  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.906587  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:25.906592  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:25.906649  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:25.937448  832221 cri.go:89] found id: ""
	I1208 00:40:25.937463  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.937471  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:25.937476  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:25.937537  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:25.966528  832221 cri.go:89] found id: ""
	I1208 00:40:25.966542  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.966549  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:25.966554  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:25.966609  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:25.993465  832221 cri.go:89] found id: ""
	I1208 00:40:25.993480  832221 logs.go:282] 0 containers: []
	W1208 00:40:25.993487  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:25.993493  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:25.993554  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:26.022155  832221 cri.go:89] found id: ""
	I1208 00:40:26.022168  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.022175  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:26.022181  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:26.022239  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:26.049049  832221 cri.go:89] found id: ""
	I1208 00:40:26.049064  832221 logs.go:282] 0 containers: []
	W1208 00:40:26.049072  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:26.049087  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:26.049098  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:26.119386  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:26.119406  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:26.155712  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:26.155729  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:26.223788  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:26.223809  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:26.245587  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:26.245610  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:26.309129  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:26.301420   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.302011   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303501   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.303823   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:26.305308   12926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:28.809355  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:28.819547  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:28.819610  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:28.849672  832221 cri.go:89] found id: ""
	I1208 00:40:28.849687  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.849694  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:28.849700  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:28.849760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:28.880748  832221 cri.go:89] found id: ""
	I1208 00:40:28.880763  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.880769  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:28.880774  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:28.880837  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:28.908198  832221 cri.go:89] found id: ""
	I1208 00:40:28.908212  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.908219  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:28.908224  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:28.908282  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:28.933130  832221 cri.go:89] found id: ""
	I1208 00:40:28.933144  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.933151  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:28.933156  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:28.933222  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:28.964126  832221 cri.go:89] found id: ""
	I1208 00:40:28.964140  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.964147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:28.964153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:28.964210  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:28.990484  832221 cri.go:89] found id: ""
	I1208 00:40:28.990499  832221 logs.go:282] 0 containers: []
	W1208 00:40:28.990506  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:28.990512  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:28.990573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:29.017806  832221 cri.go:89] found id: ""
	I1208 00:40:29.017820  832221 logs.go:282] 0 containers: []
	W1208 00:40:29.017828  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:29.017835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:29.017847  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:29.084613  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:29.084635  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:29.101973  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:29.101992  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:29.173921  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:29.165480   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.166207   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.167898   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.168382   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:29.170117   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:29.173933  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:29.173944  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:29.240893  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:29.240915  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:31.777057  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:31.790721  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:31.790788  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:31.822768  832221 cri.go:89] found id: ""
	I1208 00:40:31.822783  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.822790  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:31.822795  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:31.822969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:31.848644  832221 cri.go:89] found id: ""
	I1208 00:40:31.848657  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.848672  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:31.848678  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:31.848745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:31.874088  832221 cri.go:89] found id: ""
	I1208 00:40:31.874101  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.874117  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:31.874123  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:31.874179  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:31.899211  832221 cri.go:89] found id: ""
	I1208 00:40:31.899234  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.899242  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:31.899247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:31.899316  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:31.924268  832221 cri.go:89] found id: ""
	I1208 00:40:31.924282  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.924290  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:31.924295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:31.924355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:31.950349  832221 cri.go:89] found id: ""
	I1208 00:40:31.950363  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.950370  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:31.950376  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:31.950433  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:31.979825  832221 cri.go:89] found id: ""
	I1208 00:40:31.979848  832221 logs.go:282] 0 containers: []
	W1208 00:40:31.979856  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:31.979864  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:31.979875  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:32.045728  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:32.045748  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:32.062977  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:32.062995  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:32.127567  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:32.118954   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.119787   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121417   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.121931   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:32.123478   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:32.127579  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:32.127590  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:32.195761  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:32.195782  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:34.725887  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:34.742661  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:34.742722  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:34.778651  832221 cri.go:89] found id: ""
	I1208 00:40:34.778665  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.778672  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:34.778678  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:34.778736  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:34.811974  832221 cri.go:89] found id: ""
	I1208 00:40:34.811988  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.811995  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:34.812000  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:34.812057  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:34.844697  832221 cri.go:89] found id: ""
	I1208 00:40:34.844712  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.844719  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:34.844725  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:34.844782  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:34.872482  832221 cri.go:89] found id: ""
	I1208 00:40:34.872495  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.872502  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:34.872509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:34.872564  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:34.898220  832221 cri.go:89] found id: ""
	I1208 00:40:34.898235  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.898242  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:34.898247  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:34.898308  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:34.925442  832221 cri.go:89] found id: ""
	I1208 00:40:34.925457  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.925464  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:34.925470  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:34.925527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:34.952326  832221 cri.go:89] found id: ""
	I1208 00:40:34.952340  832221 logs.go:282] 0 containers: []
	W1208 00:40:34.952347  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:34.952355  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:34.952367  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:35.018286  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:35.018308  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:35.036568  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:35.036588  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:35.105378  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:35.095119   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.095914   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.097646   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.099888   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:35.100818   13225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:35.105389  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:35.105403  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:35.175887  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:35.175909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:37.712873  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:37.722837  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:37.722915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:37.748671  832221 cri.go:89] found id: ""
	I1208 00:40:37.748684  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.748691  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:37.748697  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:37.748760  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:37.787454  832221 cri.go:89] found id: ""
	I1208 00:40:37.787467  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.787475  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:37.787479  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:37.787540  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:37.827928  832221 cri.go:89] found id: ""
	I1208 00:40:37.827942  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.827949  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:37.827954  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:37.828015  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:37.853248  832221 cri.go:89] found id: ""
	I1208 00:40:37.853261  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.853268  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:37.853274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:37.853333  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:37.881771  832221 cri.go:89] found id: ""
	I1208 00:40:37.881785  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.881792  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:37.881797  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:37.881862  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:37.908845  832221 cri.go:89] found id: ""
	I1208 00:40:37.908858  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.908864  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:37.908870  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:37.908927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:37.933663  832221 cri.go:89] found id: ""
	I1208 00:40:37.933676  832221 logs.go:282] 0 containers: []
	W1208 00:40:37.933684  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:37.933691  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:37.933702  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:37.950237  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:37.950253  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:38.015251  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:38.005364   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.006494   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.007608   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009342   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:38.009909   13327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:38.015261  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:38.015272  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:38.086877  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:38.086899  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:38.120835  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:38.120851  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:40.690876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:40.701698  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:40.701757  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:40.728919  832221 cri.go:89] found id: ""
	I1208 00:40:40.728933  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.728944  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:40.728950  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:40.729006  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:40.756412  832221 cri.go:89] found id: ""
	I1208 00:40:40.756426  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.756433  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:40.756438  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:40.756496  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:40.785209  832221 cri.go:89] found id: ""
	I1208 00:40:40.785223  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.785230  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:40.785235  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:40.785293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:40.812803  832221 cri.go:89] found id: ""
	I1208 00:40:40.812816  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.812823  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:40.812828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:40.812884  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:40.841663  832221 cri.go:89] found id: ""
	I1208 00:40:40.841676  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.841683  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:40.841688  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:40.841745  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:40.867267  832221 cri.go:89] found id: ""
	I1208 00:40:40.867281  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.867298  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:40.867304  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:40.867365  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:40.896639  832221 cri.go:89] found id: ""
	I1208 00:40:40.896652  832221 logs.go:282] 0 containers: []
	W1208 00:40:40.896661  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:40.896668  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:40.896678  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:40.960376  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:40.951828   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.952561   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954235   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.954715   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:40.956258   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:40.960386  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:40.960397  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:41.032818  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:41.032839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.062752  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:41.062771  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:41.130656  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:41.130676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.649290  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:43.659339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:43.659404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:43.685304  832221 cri.go:89] found id: ""
	I1208 00:40:43.685319  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.685326  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:43.685332  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:43.685394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:43.710805  832221 cri.go:89] found id: ""
	I1208 00:40:43.710820  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.710827  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:43.710856  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:43.710933  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:43.735910  832221 cri.go:89] found id: ""
	I1208 00:40:43.735923  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.735930  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:43.735936  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:43.735994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:43.776908  832221 cri.go:89] found id: ""
	I1208 00:40:43.776921  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.776928  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:43.776934  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:43.776997  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:43.809711  832221 cri.go:89] found id: ""
	I1208 00:40:43.809724  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.809731  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:43.809736  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:43.809794  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:43.838996  832221 cri.go:89] found id: ""
	I1208 00:40:43.839009  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.839016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:43.839022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:43.839087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:43.864075  832221 cri.go:89] found id: ""
	I1208 00:40:43.864088  832221 logs.go:282] 0 containers: []
	W1208 00:40:43.864095  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:43.864103  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:43.864120  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:43.930430  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:43.930449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:43.948281  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:43.948301  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:44.016438  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:44.007301   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.008105   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.009920   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.010388   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:44.011991   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:44.016448  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:44.016462  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:44.087788  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:44.087808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.619014  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:46.629647  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:46.629711  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:46.655337  832221 cri.go:89] found id: ""
	I1208 00:40:46.655352  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.655360  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:46.655365  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:46.655426  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:46.685122  832221 cri.go:89] found id: ""
	I1208 00:40:46.685137  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.685145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:46.685150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:46.685218  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:46.711647  832221 cri.go:89] found id: ""
	I1208 00:40:46.711661  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.711669  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:46.711674  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:46.711739  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:46.739056  832221 cri.go:89] found id: ""
	I1208 00:40:46.739070  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.739077  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:46.739082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:46.739138  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:46.777014  832221 cri.go:89] found id: ""
	I1208 00:40:46.777040  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.777047  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:46.777053  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:46.777120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:46.821392  832221 cri.go:89] found id: ""
	I1208 00:40:46.821407  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.821414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:46.821419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:46.821481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:46.847683  832221 cri.go:89] found id: ""
	I1208 00:40:46.847706  832221 logs.go:282] 0 containers: []
	W1208 00:40:46.847714  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:46.847722  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:46.847735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:46.880771  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:46.880787  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:46.946188  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:46.946208  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:46.965130  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:46.965147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:47.035809  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:47.027426   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.028169   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.029695   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.030242   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:47.031860   13662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:47.035820  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:47.035843  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.603876  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:49.614271  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:49.614332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:49.640814  832221 cri.go:89] found id: ""
	I1208 00:40:49.640827  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.640834  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:49.640840  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:49.640898  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:49.670323  832221 cri.go:89] found id: ""
	I1208 00:40:49.670337  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.670345  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:49.670351  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:49.670409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:49.696270  832221 cri.go:89] found id: ""
	I1208 00:40:49.696284  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.696290  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:49.696295  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:49.696353  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:49.725434  832221 cri.go:89] found id: ""
	I1208 00:40:49.725448  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.725454  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:49.725468  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:49.725525  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:49.760362  832221 cri.go:89] found id: ""
	I1208 00:40:49.760375  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.760382  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:49.760393  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:49.760450  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:49.789531  832221 cri.go:89] found id: ""
	I1208 00:40:49.789545  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.789552  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:49.789567  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:49.789637  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:49.818353  832221 cri.go:89] found id: ""
	I1208 00:40:49.818367  832221 logs.go:282] 0 containers: []
	W1208 00:40:49.818374  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:49.818390  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:49.818401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:49.890934  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:49.890956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:49.919198  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:49.919214  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:49.988173  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:49.988194  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:50.007229  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:50.007249  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:50.081725  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:50.072995   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.073702   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.075562   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.076019   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:50.077605   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.581991  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:52.592775  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:52.592847  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:52.619761  832221 cri.go:89] found id: ""
	I1208 00:40:52.619775  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.619782  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:52.619788  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:52.619853  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:52.647647  832221 cri.go:89] found id: ""
	I1208 00:40:52.647662  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.647669  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:52.647674  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:52.647761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:52.673131  832221 cri.go:89] found id: ""
	I1208 00:40:52.673145  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.673152  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:52.673161  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:52.673228  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:52.699525  832221 cri.go:89] found id: ""
	I1208 00:40:52.699540  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.699547  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:52.699553  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:52.699620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:52.725467  832221 cri.go:89] found id: ""
	I1208 00:40:52.725482  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.725489  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:52.725494  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:52.725556  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:52.756767  832221 cri.go:89] found id: ""
	I1208 00:40:52.756782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.756790  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:52.756796  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:52.756855  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:52.787768  832221 cri.go:89] found id: ""
	I1208 00:40:52.787782  832221 logs.go:282] 0 containers: []
	W1208 00:40:52.787790  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:52.787797  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:52.787808  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:52.817811  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:52.817827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:52.889380  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:52.889401  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:52.906939  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:52.906956  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:52.971866  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:52.963137   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.963846   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.965517   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.966128   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:52.967831   13871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:52.971876  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:52.971889  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.544702  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:55.554800  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:55.554875  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:55.581294  832221 cri.go:89] found id: ""
	I1208 00:40:55.581309  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.581316  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:55.581321  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:55.581384  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:55.609189  832221 cri.go:89] found id: ""
	I1208 00:40:55.609210  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.609217  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:55.609222  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:55.609281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:55.636121  832221 cri.go:89] found id: ""
	I1208 00:40:55.636135  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.636142  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:55.636147  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:55.636212  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:55.661670  832221 cri.go:89] found id: ""
	I1208 00:40:55.661684  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.661691  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:55.661697  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:55.661756  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:55.687332  832221 cri.go:89] found id: ""
	I1208 00:40:55.687345  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.687352  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:55.687358  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:55.687416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:55.713054  832221 cri.go:89] found id: ""
	I1208 00:40:55.713069  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.713076  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:55.713082  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:55.713140  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:55.742979  832221 cri.go:89] found id: ""
	I1208 00:40:55.742993  832221 logs.go:282] 0 containers: []
	W1208 00:40:55.743000  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:55.743008  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:55.743019  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:55.761280  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:55.761297  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:55.838925  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:55.830698   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.831571   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833176   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.833798   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:55.835104   13962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:55.838936  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:55.838949  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:55.910195  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:55.910218  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:55.940346  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:55.940364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.509357  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:58.519836  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:58.519901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:58.545859  832221 cri.go:89] found id: ""
	I1208 00:40:58.545874  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.545881  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:58.545887  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:40:58.545948  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:58.575589  832221 cri.go:89] found id: ""
	I1208 00:40:58.575603  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.575609  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:40:58.575614  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:40:58.575672  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:58.604890  832221 cri.go:89] found id: ""
	I1208 00:40:58.604905  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.604911  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:40:58.604917  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:58.604974  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:58.630992  832221 cri.go:89] found id: ""
	I1208 00:40:58.631006  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.631013  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:58.631018  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:58.631075  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:58.656862  832221 cri.go:89] found id: ""
	I1208 00:40:58.656875  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.656882  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:58.656887  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:58.656950  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:58.693729  832221 cri.go:89] found id: ""
	I1208 00:40:58.693744  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.693751  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:58.693756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:58.693815  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:58.719999  832221 cri.go:89] found id: ""
	I1208 00:40:58.720014  832221 logs.go:282] 0 containers: []
	W1208 00:40:58.720021  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:58.720029  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:58.720040  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:58.787457  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:58.787475  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:58.809951  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:58.809970  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:58.877531  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:58.869227   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.870002   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.871542   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.872068   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:58.873583   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:58.877584  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:40:58.877595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:40:58.944804  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:40:58.944823  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
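[editor's note] Each retry cycle in this log fails the "describe nodes" step the same way: the apiserver endpoint at localhost:8441 refuses TCP connections, so kubectl never reaches the cluster and no control-plane containers are ever found. A minimal, hypothetical Go sketch of that reachability probe follows; the address and "connection refused" behaviour are taken from the log, while the function name and timeout are illustrative and not minikube's own code.

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverReachable reports whether a TCP connection to the given
// apiserver address can be opened before the timeout expires. It
// mirrors the failing check seen in the log, where every attempt
// against localhost:8441 ends in "connect: connection refused".
func apiserverReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if apiserverReachable("localhost:8441", 2*time.Second) {
		fmt.Println("apiserver is accepting connections")
	} else {
		fmt.Println("apiserver unreachable (connection refused, as in the log)")
	}
}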
	I1208 00:41:01.474302  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:01.485101  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:01.485163  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:01.512067  832221 cri.go:89] found id: ""
	I1208 00:41:01.512081  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.512094  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:01.512100  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:01.512173  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:01.538625  832221 cri.go:89] found id: ""
	I1208 00:41:01.538639  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.538646  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:01.538651  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:01.538712  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:01.564246  832221 cri.go:89] found id: ""
	I1208 00:41:01.564260  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.564268  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:01.564273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:01.564341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:01.590766  832221 cri.go:89] found id: ""
	I1208 00:41:01.590780  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.590787  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:01.590793  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:01.590880  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:01.618080  832221 cri.go:89] found id: ""
	I1208 00:41:01.618095  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.618102  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:01.618107  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:01.618166  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:01.644849  832221 cri.go:89] found id: ""
	I1208 00:41:01.644864  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.644872  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:01.644878  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:01.644943  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:01.670907  832221 cri.go:89] found id: ""
	I1208 00:41:01.670927  832221 logs.go:282] 0 containers: []
	W1208 00:41:01.670945  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:01.670953  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:01.670972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:01.737140  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:01.737160  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:01.756176  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:01.756199  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:01.837855  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:01.829258   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.830015   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.831708   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.832373   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:01.833946   14177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:01.837866  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:01.837880  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:01.907644  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:01.907665  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:04.439011  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:04.449676  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:04.449738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:04.475094  832221 cri.go:89] found id: ""
	I1208 00:41:04.475107  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.475116  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:04.475122  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:04.475180  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:04.499488  832221 cri.go:89] found id: ""
	I1208 00:41:04.499502  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.499509  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:04.499514  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:04.499574  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:04.524302  832221 cri.go:89] found id: ""
	I1208 00:41:04.524315  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.524322  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:04.524328  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:04.524399  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:04.550178  832221 cri.go:89] found id: ""
	I1208 00:41:04.550192  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.550207  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:04.550214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:04.550290  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:04.579863  832221 cri.go:89] found id: ""
	I1208 00:41:04.579876  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.579883  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:04.579888  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:04.579947  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:04.612186  832221 cri.go:89] found id: ""
	I1208 00:41:04.612200  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.612207  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:04.612212  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:04.612268  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:04.638270  832221 cri.go:89] found id: ""
	I1208 00:41:04.638291  832221 logs.go:282] 0 containers: []
	W1208 00:41:04.638298  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:04.638305  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:04.638316  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:04.704479  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:04.704498  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:04.721141  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:04.721158  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:04.791977  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:04.784021   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.784386   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.785813   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.786384   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:04.787924   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:04.791987  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:04.792009  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:04.869143  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:04.869164  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:07.399175  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:07.409630  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:07.409692  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:07.436029  832221 cri.go:89] found id: ""
	I1208 00:41:07.436051  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.436059  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:07.436065  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:07.436133  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:07.462353  832221 cri.go:89] found id: ""
	I1208 00:41:07.462367  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.462374  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:07.462379  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:07.462438  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:07.488128  832221 cri.go:89] found id: ""
	I1208 00:41:07.488142  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.488149  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:07.488154  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:07.488217  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:07.516680  832221 cri.go:89] found id: ""
	I1208 00:41:07.516694  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.516700  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:07.516705  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:07.516761  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:07.541724  832221 cri.go:89] found id: ""
	I1208 00:41:07.541738  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.541747  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:07.541752  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:07.541809  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:07.566019  832221 cri.go:89] found id: ""
	I1208 00:41:07.566033  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.566049  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:07.566055  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:07.566120  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:07.590763  832221 cri.go:89] found id: ""
	I1208 00:41:07.590786  832221 logs.go:282] 0 containers: []
	W1208 00:41:07.590793  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:07.590800  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:07.590811  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:07.655603  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:07.655627  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:07.672718  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:07.672735  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:07.739768  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:07.731663   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.732102   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.733741   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.734305   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:07.735862   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:07.739777  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:07.739788  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:07.818332  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:07.818351  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:10.352542  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:10.362750  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:10.362807  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:10.387611  832221 cri.go:89] found id: ""
	I1208 00:41:10.387625  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.387631  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:10.387637  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:10.387702  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:10.416324  832221 cri.go:89] found id: ""
	I1208 00:41:10.416338  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.416344  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:10.416349  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:10.416407  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:10.441107  832221 cri.go:89] found id: ""
	I1208 00:41:10.441121  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.441128  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:10.441133  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:10.441199  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:10.469633  832221 cri.go:89] found id: ""
	I1208 00:41:10.469646  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.469659  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:10.469664  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:10.469723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:10.494876  832221 cri.go:89] found id: ""
	I1208 00:41:10.494890  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.494896  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:10.494902  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:10.494960  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:10.531392  832221 cri.go:89] found id: ""
	I1208 00:41:10.531407  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.531414  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:10.531419  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:10.531488  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:10.564042  832221 cri.go:89] found id: ""
	I1208 00:41:10.564056  832221 logs.go:282] 0 containers: []
	W1208 00:41:10.564063  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:10.564072  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:10.564082  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:10.630069  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:10.630089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:10.647244  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:10.647260  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:10.722704  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:10.714334   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.714941   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716459   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.716957   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:10.718378   14485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:10.722715  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:10.722727  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:10.795845  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:10.795865  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.326398  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:13.336729  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:13.336789  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:13.362204  832221 cri.go:89] found id: ""
	I1208 00:41:13.362218  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.362225  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:13.362231  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:13.362288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:13.387741  832221 cri.go:89] found id: ""
	I1208 00:41:13.387755  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.387762  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:13.387767  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:13.387825  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:13.416495  832221 cri.go:89] found id: ""
	I1208 00:41:13.416508  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.416515  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:13.416520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:13.416580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:13.442986  832221 cri.go:89] found id: ""
	I1208 00:41:13.443000  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.443008  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:13.443015  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:13.443074  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:13.468540  832221 cri.go:89] found id: ""
	I1208 00:41:13.468555  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.468562  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:13.468568  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:13.468626  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:13.494472  832221 cri.go:89] found id: ""
	I1208 00:41:13.494487  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.494494  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:13.494500  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:13.494561  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:13.521305  832221 cri.go:89] found id: ""
	I1208 00:41:13.521318  832221 logs.go:282] 0 containers: []
	W1208 00:41:13.521325  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:13.521333  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:13.521347  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:13.553343  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:13.553359  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:13.621324  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:13.621342  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:13.638433  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:13.638450  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:13.707199  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:13.699229   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.699810   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701372   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.701710   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:13.703289   14600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:13.707209  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:13.707232  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.276942  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:16.286989  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:16.287051  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:16.312004  832221 cri.go:89] found id: ""
	I1208 00:41:16.312018  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.312025  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:16.312031  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:16.312090  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:16.336677  832221 cri.go:89] found id: ""
	I1208 00:41:16.336691  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.336698  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:16.336703  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:16.336763  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:16.361556  832221 cri.go:89] found id: ""
	I1208 00:41:16.361579  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.361587  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:16.361592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:16.361661  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:16.386950  832221 cri.go:89] found id: ""
	I1208 00:41:16.386964  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.386971  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:16.386977  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:16.387045  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:16.413845  832221 cri.go:89] found id: ""
	I1208 00:41:16.413867  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.413877  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:16.413883  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:16.413949  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:16.439928  832221 cri.go:89] found id: ""
	I1208 00:41:16.439942  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.439959  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:16.439965  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:16.440030  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:16.466154  832221 cri.go:89] found id: ""
	I1208 00:41:16.466176  832221 logs.go:282] 0 containers: []
	W1208 00:41:16.466183  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:16.466191  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:16.466201  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:16.533106  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:16.533124  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:16.563727  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:16.563742  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:16.633732  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:16.633751  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:16.650899  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:16.650917  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:16.719345  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:16.710576   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.711175   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.712842   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.713540   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:16.715378   14703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.221010  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:19.231342  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:19.231406  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:19.257316  832221 cri.go:89] found id: ""
	I1208 00:41:19.257330  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.257337  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:19.257343  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:19.257401  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:19.283560  832221 cri.go:89] found id: ""
	I1208 00:41:19.283574  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.283581  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:19.283586  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:19.283645  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:19.309316  832221 cri.go:89] found id: ""
	I1208 00:41:19.309332  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.309339  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:19.309344  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:19.309404  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:19.336530  832221 cri.go:89] found id: ""
	I1208 00:41:19.336544  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.336551  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:19.336558  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:19.336617  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:19.362493  832221 cri.go:89] found id: ""
	I1208 00:41:19.362507  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.362515  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:19.362520  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:19.362580  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:19.388582  832221 cri.go:89] found id: ""
	I1208 00:41:19.388602  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.388609  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:19.388614  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:19.388671  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:19.414534  832221 cri.go:89] found id: ""
	I1208 00:41:19.414547  832221 logs.go:282] 0 containers: []
	W1208 00:41:19.414554  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:19.414562  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:19.414573  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:19.478886  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:19.470256   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.470986   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472576   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.472883   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:19.474460   14791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:19.478896  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:19.478908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:19.547311  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:19.547330  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:19.577785  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:19.577801  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:19.643881  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:19.643902  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.161081  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:22.171521  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:22.171585  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:22.198382  832221 cri.go:89] found id: ""
	I1208 00:41:22.198396  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.198413  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:22.198418  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:22.198474  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:22.224532  832221 cri.go:89] found id: ""
	I1208 00:41:22.224547  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.224554  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:22.224560  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:22.224618  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:22.250646  832221 cri.go:89] found id: ""
	I1208 00:41:22.250660  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.250667  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:22.250672  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:22.250738  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:22.276120  832221 cri.go:89] found id: ""
	I1208 00:41:22.276134  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.276141  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:22.276146  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:22.276204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:22.307378  832221 cri.go:89] found id: ""
	I1208 00:41:22.307392  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.307399  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:22.307405  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:22.307481  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:22.332887  832221 cri.go:89] found id: ""
	I1208 00:41:22.332902  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.332909  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:22.332915  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:22.332973  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:22.359765  832221 cri.go:89] found id: ""
	I1208 00:41:22.359790  832221 logs.go:282] 0 containers: []
	W1208 00:41:22.359799  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:22.359806  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:22.359817  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:22.429639  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:22.429667  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:22.446411  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:22.446429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:22.514425  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:22.506102   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.506878   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508409   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.508828   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:22.510405   14901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:22.514437  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:22.514449  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:22.582646  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:22.582668  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.113244  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:25.123522  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:25.123581  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:25.149789  832221 cri.go:89] found id: ""
	I1208 00:41:25.149803  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.149811  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:25.149816  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:25.149877  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:25.175748  832221 cri.go:89] found id: ""
	I1208 00:41:25.175780  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.175787  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:25.175793  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:25.175860  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:25.201633  832221 cri.go:89] found id: ""
	I1208 00:41:25.201647  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.201654  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:25.201660  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:25.201718  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:25.226256  832221 cri.go:89] found id: ""
	I1208 00:41:25.226270  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.226276  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:25.226282  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:25.226340  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:25.251247  832221 cri.go:89] found id: ""
	I1208 00:41:25.251260  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.251267  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:25.251272  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:25.251332  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:25.276489  832221 cri.go:89] found id: ""
	I1208 00:41:25.276502  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.276509  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:25.276514  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:25.276571  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:25.304102  832221 cri.go:89] found id: ""
	I1208 00:41:25.304116  832221 logs.go:282] 0 containers: []
	W1208 00:41:25.304123  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:25.304131  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:25.304141  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:25.334560  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:25.334578  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:25.403772  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:25.403794  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:25.420560  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:25.420577  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:25.482668  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:25.474873   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.475553   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477100   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.477416   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:25.478950   15020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:25.482678  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:25.482689  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.050629  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:28.061960  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:28.062020  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:28.089309  832221 cri.go:89] found id: ""
	I1208 00:41:28.089322  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.089330  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:28.089335  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:28.089394  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:28.114535  832221 cri.go:89] found id: ""
	I1208 00:41:28.114549  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.114556  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:28.114561  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:28.114620  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:28.139191  832221 cri.go:89] found id: ""
	I1208 00:41:28.139205  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.139212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:28.139218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:28.139281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:28.169942  832221 cri.go:89] found id: ""
	I1208 00:41:28.169956  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.169963  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:28.169968  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:28.170026  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:28.194906  832221 cri.go:89] found id: ""
	I1208 00:41:28.194920  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.194927  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:28.194932  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:28.194991  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:28.220745  832221 cri.go:89] found id: ""
	I1208 00:41:28.220759  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.220766  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:28.220772  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:28.220831  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:28.246098  832221 cri.go:89] found id: ""
	I1208 00:41:28.246113  832221 logs.go:282] 0 containers: []
	W1208 00:41:28.246128  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:28.246137  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:28.246147  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:28.311151  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:28.311171  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:28.328051  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:28.328067  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:28.392162  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:28.383698   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.384409   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386106   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.386606   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:28.388119   15118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:28.392172  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:28.392183  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:28.461355  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:28.461376  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:30.991861  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:31.002524  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:31.002603  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:31.053691  832221 cri.go:89] found id: ""
	I1208 00:41:31.053708  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.053715  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:31.053725  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:31.053785  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:31.089132  832221 cri.go:89] found id: ""
	I1208 00:41:31.089146  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.089163  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:31.089169  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:31.089252  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:31.121093  832221 cri.go:89] found id: ""
	I1208 00:41:31.121107  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.121114  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:31.121120  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:31.121193  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:31.148473  832221 cri.go:89] found id: ""
	I1208 00:41:31.148502  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.148510  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:31.148517  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:31.148576  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:31.174204  832221 cri.go:89] found id: ""
	I1208 00:41:31.174218  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.174225  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:31.174231  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:31.174291  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:31.199996  832221 cri.go:89] found id: ""
	I1208 00:41:31.200009  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.200016  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:31.200021  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:31.200079  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:31.224662  832221 cri.go:89] found id: ""
	I1208 00:41:31.224674  832221 logs.go:282] 0 containers: []
	W1208 00:41:31.224681  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:31.224689  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:31.224699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:31.291397  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:31.291417  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:31.308061  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:31.308078  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:31.372069  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:31.363688   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.364492   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366076   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.366554   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:31.368081   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:31.372079  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:31.372089  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:31.443951  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:31.443972  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:33.976603  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:33.987054  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:33.987113  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:34.031182  832221 cri.go:89] found id: ""
	I1208 00:41:34.031197  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.031205  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:34.031211  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:34.031285  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:34.060124  832221 cri.go:89] found id: ""
	I1208 00:41:34.060137  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.060145  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:34.060150  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:34.060207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:34.092539  832221 cri.go:89] found id: ""
	I1208 00:41:34.092553  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.092560  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:34.092565  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:34.092627  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:34.121995  832221 cri.go:89] found id: ""
	I1208 00:41:34.122009  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.122016  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:34.122022  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:34.122077  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:34.150463  832221 cri.go:89] found id: ""
	I1208 00:41:34.150476  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.150483  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:34.150488  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:34.150549  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:34.177998  832221 cri.go:89] found id: ""
	I1208 00:41:34.178021  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.178029  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:34.178034  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:34.178102  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:34.202722  832221 cri.go:89] found id: ""
	I1208 00:41:34.202737  832221 logs.go:282] 0 containers: []
	W1208 00:41:34.202744  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:34.202751  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:34.202761  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:34.267650  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:34.267670  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:34.284346  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:34.284364  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:34.348837  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:34.339259   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.339775   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341532   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.341845   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:34.343351   15324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:34.348848  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:34.348858  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:34.417091  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:34.417112  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:36.948347  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:36.958825  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:36.958908  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:36.984186  832221 cri.go:89] found id: ""
	I1208 00:41:36.984200  832221 logs.go:282] 0 containers: []
	W1208 00:41:36.984207  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:36.984212  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:36.984269  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:37.020431  832221 cri.go:89] found id: ""
	I1208 00:41:37.020446  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.020454  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:37.020460  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:37.020530  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:37.067191  832221 cri.go:89] found id: ""
	I1208 00:41:37.067205  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.067212  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:37.067218  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:37.067294  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:37.094272  832221 cri.go:89] found id: ""
	I1208 00:41:37.094286  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.094293  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:37.094298  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:37.094355  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:37.119686  832221 cri.go:89] found id: ""
	I1208 00:41:37.119709  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.119716  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:37.119722  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:37.119787  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:37.145200  832221 cri.go:89] found id: ""
	I1208 00:41:37.145214  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.145221  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:37.145227  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:37.145288  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:37.171336  832221 cri.go:89] found id: ""
	I1208 00:41:37.171350  832221 logs.go:282] 0 containers: []
	W1208 00:41:37.171357  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:37.171364  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:37.171375  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:37.237645  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:37.237664  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:37.254543  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:37.254560  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:37.322370  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:37.313914   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.314565   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316282   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.316842   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:37.318568   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:37.322380  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:37.322392  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:37.391923  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:37.391943  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:39.926099  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:39.936345  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:39.936412  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:39.962579  832221 cri.go:89] found id: ""
	I1208 00:41:39.962593  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.962600  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:39.962605  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:39.962669  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:39.989842  832221 cri.go:89] found id: ""
	I1208 00:41:39.989856  832221 logs.go:282] 0 containers: []
	W1208 00:41:39.989863  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:39.989868  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:39.989926  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:40.044295  832221 cri.go:89] found id: ""
	I1208 00:41:40.044310  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.044325  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:40.044339  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:40.044416  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:40.079243  832221 cri.go:89] found id: ""
	I1208 00:41:40.079258  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.079266  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:40.079273  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:40.079349  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:40.112934  832221 cri.go:89] found id: ""
	I1208 00:41:40.112948  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.112956  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:40.112961  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:40.113039  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:40.143499  832221 cri.go:89] found id: ""
	I1208 00:41:40.143513  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.143521  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:40.143526  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:40.143587  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:40.169504  832221 cri.go:89] found id: ""
	I1208 00:41:40.169519  832221 logs.go:282] 0 containers: []
	W1208 00:41:40.169526  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:40.169533  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:40.169544  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:40.235615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:40.235638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:40.252840  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:40.252857  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:40.321804  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:40.313121   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.313979   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.315716   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.316388   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:40.317984   15534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:40.321814  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:40.321827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:40.390368  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:40.390389  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:42.923500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:42.933619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:42.933678  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:42.959506  832221 cri.go:89] found id: ""
	I1208 00:41:42.959520  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.959527  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:42.959533  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:42.959596  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:42.984924  832221 cri.go:89] found id: ""
	I1208 00:41:42.984937  832221 logs.go:282] 0 containers: []
	W1208 00:41:42.984946  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:42.984951  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:42.985013  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:43.023875  832221 cri.go:89] found id: ""
	I1208 00:41:43.023889  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.023896  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:43.023903  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:43.023962  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:43.053076  832221 cri.go:89] found id: ""
	I1208 00:41:43.053090  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.053097  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:43.053102  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:43.053185  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:43.084087  832221 cri.go:89] found id: ""
	I1208 00:41:43.084101  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.084108  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:43.084113  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:43.084174  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:43.109712  832221 cri.go:89] found id: ""
	I1208 00:41:43.109737  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.109746  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:43.109751  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:43.109817  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:43.134863  832221 cri.go:89] found id: ""
	I1208 00:41:43.134877  832221 logs.go:282] 0 containers: []
	W1208 00:41:43.134886  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:43.134894  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:43.134908  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:43.201957  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:43.193963   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.194498   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196024   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.196494   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:43.197967   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:43.201967  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:43.201982  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:43.273086  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:43.273107  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:43.305154  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:43.305177  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:43.373686  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:43.373708  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:45.892403  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:45.902913  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:45.902990  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:45.927841  832221 cri.go:89] found id: ""
	I1208 00:41:45.927855  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.927862  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:45.927868  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:45.927927  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:45.952154  832221 cri.go:89] found id: ""
	I1208 00:41:45.952167  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.952174  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:45.952179  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:45.952236  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:45.979675  832221 cri.go:89] found id: ""
	I1208 00:41:45.979688  832221 logs.go:282] 0 containers: []
	W1208 00:41:45.979696  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:45.979700  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:45.979755  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:46.013259  832221 cri.go:89] found id: ""
	I1208 00:41:46.013273  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.013280  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:46.013285  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:46.013351  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:46.042352  832221 cri.go:89] found id: ""
	I1208 00:41:46.042366  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.042372  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:46.042377  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:46.042440  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:46.070733  832221 cri.go:89] found id: ""
	I1208 00:41:46.070746  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.070753  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:46.070763  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:46.070823  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:46.098473  832221 cri.go:89] found id: ""
	I1208 00:41:46.098487  832221 logs.go:282] 0 containers: []
	W1208 00:41:46.098494  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:46.098502  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:46.098512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:46.125193  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:46.125209  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:46.193253  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:46.193274  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:46.210082  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:46.210099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:46.276709  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:46.268033   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.268871   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.270582   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.271243   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:46.272912   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:46.276719  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:46.276730  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:48.845307  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:48.856005  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:48.856069  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:48.880627  832221 cri.go:89] found id: ""
	I1208 00:41:48.880643  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.880650  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:48.880655  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:48.880723  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:48.910676  832221 cri.go:89] found id: ""
	I1208 00:41:48.910691  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.910699  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:48.910704  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:48.910765  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:48.937001  832221 cri.go:89] found id: ""
	I1208 00:41:48.937015  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.937022  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:48.937027  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:48.937087  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:48.961464  832221 cri.go:89] found id: ""
	I1208 00:41:48.961478  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.961484  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:48.961489  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:48.961546  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:48.985593  832221 cri.go:89] found id: ""
	I1208 00:41:48.985607  832221 logs.go:282] 0 containers: []
	W1208 00:41:48.985614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:48.985618  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:48.985673  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:49.021903  832221 cri.go:89] found id: ""
	I1208 00:41:49.021917  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.021924  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:49.021929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:49.021987  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:49.051822  832221 cri.go:89] found id: ""
	I1208 00:41:49.051835  832221 logs.go:282] 0 containers: []
	W1208 00:41:49.051842  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:49.051850  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:49.051860  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:49.119331  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:49.119350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:49.136412  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:49.136429  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:49.209120  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:49.200755   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.201571   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203264   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.203743   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:49.205269   15852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:49.209130  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:49.209142  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:49.281668  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:49.281696  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:51.816189  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:51.826432  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:51.826508  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:51.852549  832221 cri.go:89] found id: ""
	I1208 00:41:51.852563  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.852570  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:51.852575  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:51.852639  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:51.882102  832221 cri.go:89] found id: ""
	I1208 00:41:51.882115  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.882123  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:51.882128  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:51.882183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:51.908918  832221 cri.go:89] found id: ""
	I1208 00:41:51.908931  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.908938  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:51.908943  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:51.908999  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:51.933704  832221 cri.go:89] found id: ""
	I1208 00:41:51.933718  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.933725  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:51.933731  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:51.933786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:51.959460  832221 cri.go:89] found id: ""
	I1208 00:41:51.959474  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.959480  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:51.959485  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:51.959543  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:51.985138  832221 cri.go:89] found id: ""
	I1208 00:41:51.985151  832221 logs.go:282] 0 containers: []
	W1208 00:41:51.985158  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:51.985170  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:51.985229  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:52.017078  832221 cri.go:89] found id: ""
	I1208 00:41:52.017092  832221 logs.go:282] 0 containers: []
	W1208 00:41:52.017100  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:52.017108  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:52.017118  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:52.061579  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:52.061595  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:52.130427  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:52.130446  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:52.146893  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:52.146909  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:52.216088  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:52.207898   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.208309   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.209867   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.210174   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:52.211567   15969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:52.216098  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:52.216109  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:54.782500  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:54.793061  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:54.793123  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:54.818661  832221 cri.go:89] found id: ""
	I1208 00:41:54.818675  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.818682  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:54.818688  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:54.818747  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:54.843336  832221 cri.go:89] found id: ""
	I1208 00:41:54.843351  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.843358  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:54.843363  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:54.843423  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:54.873031  832221 cri.go:89] found id: ""
	I1208 00:41:54.873045  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.873052  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:54.873057  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:54.873114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:54.904194  832221 cri.go:89] found id: ""
	I1208 00:41:54.904208  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.904215  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:54.904221  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:54.904281  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:54.928355  832221 cri.go:89] found id: ""
	I1208 00:41:54.928370  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.928377  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:54.928382  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:54.928441  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:54.954187  832221 cri.go:89] found id: ""
	I1208 00:41:54.954201  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.954208  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:54.954214  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:54.954277  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:54.979288  832221 cri.go:89] found id: ""
	I1208 00:41:54.979301  832221 logs.go:282] 0 containers: []
	W1208 00:41:54.979308  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:54.979316  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:54.979329  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:55.047402  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:55.047422  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:55.065193  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:55.065210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:55.134035  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:55.125723   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.126428   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128028   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.128732   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:55.130297   16064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:55.134045  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:55.134056  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:55.202635  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:55.202656  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:57.732860  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:57.743009  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:57.743070  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:57.769255  832221 cri.go:89] found id: ""
	I1208 00:41:57.769270  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.769277  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:57.769282  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:41:57.769341  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:57.796071  832221 cri.go:89] found id: ""
	I1208 00:41:57.796084  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.796092  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:41:57.796097  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:41:57.796152  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:57.821305  832221 cri.go:89] found id: ""
	I1208 00:41:57.821319  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.821326  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:41:57.821331  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:57.821389  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:57.850632  832221 cri.go:89] found id: ""
	I1208 00:41:57.850646  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.850653  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:57.850658  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:57.850715  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:57.874739  832221 cri.go:89] found id: ""
	I1208 00:41:57.874753  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.874760  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:57.874766  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:57.874829  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:57.898660  832221 cri.go:89] found id: ""
	I1208 00:41:57.898674  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.898681  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:57.898687  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:57.898744  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:57.924451  832221 cri.go:89] found id: ""
	I1208 00:41:57.924465  832221 logs.go:282] 0 containers: []
	W1208 00:41:57.924472  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:57.924480  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:57.924490  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:57.990717  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:57.990739  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:58.009617  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:58.009637  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:58.089328  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:58.080773   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.081467   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083224   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.083595   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:58.084901   16169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:58.089339  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:41:58.089350  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:41:58.158129  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:41:58.158149  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:00.692822  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:00.703351  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:00.703413  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:00.730817  832221 cri.go:89] found id: ""
	I1208 00:42:00.730831  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.730838  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:00.730864  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:00.730925  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:00.757577  832221 cri.go:89] found id: ""
	I1208 00:42:00.757591  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.757599  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:00.757604  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:00.757668  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:00.784124  832221 cri.go:89] found id: ""
	I1208 00:42:00.784140  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.784147  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:00.784153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:00.784213  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:00.811121  832221 cri.go:89] found id: ""
	I1208 00:42:00.811136  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.811143  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:00.811149  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:00.811207  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:00.838124  832221 cri.go:89] found id: ""
	I1208 00:42:00.838139  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.838147  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:00.838153  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:00.838216  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:00.864699  832221 cri.go:89] found id: ""
	I1208 00:42:00.864713  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.864720  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:00.864726  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:00.864786  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:00.890750  832221 cri.go:89] found id: ""
	I1208 00:42:00.890772  832221 logs.go:282] 0 containers: []
	W1208 00:42:00.890780  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:00.890788  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:00.890799  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:00.956810  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:00.956830  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:00.973943  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:00.973959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:01.050555  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:01.039526   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.040312   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.042428   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.043230   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:01.045174   16269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:01.050566  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:01.050579  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:01.129234  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:01.129257  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:03.659413  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:03.669877  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:03.669937  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:03.696297  832221 cri.go:89] found id: ""
	I1208 00:42:03.696316  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.696324  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:03.696329  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:03.696388  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:03.722691  832221 cri.go:89] found id: ""
	I1208 00:42:03.722706  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.722713  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:03.722718  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:03.722777  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:03.749319  832221 cri.go:89] found id: ""
	I1208 00:42:03.749336  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.749343  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:03.749348  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:03.749409  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:03.778235  832221 cri.go:89] found id: ""
	I1208 00:42:03.778250  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.778257  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:03.778262  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:03.778323  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:03.805566  832221 cri.go:89] found id: ""
	I1208 00:42:03.805579  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.805586  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:03.805592  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:03.805656  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:03.835418  832221 cri.go:89] found id: ""
	I1208 00:42:03.835434  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.835441  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:03.835447  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:03.835507  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:03.862034  832221 cri.go:89] found id: ""
	I1208 00:42:03.862048  832221 logs.go:282] 0 containers: []
	W1208 00:42:03.862056  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:03.862063  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:03.862074  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:03.926004  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:03.917609   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.918180   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.919729   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.920201   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:03.921670   16368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:03.926014  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:03.926025  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:03.994473  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:03.994491  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:04.028498  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:04.028530  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:04.103887  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:04.103913  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:06.621744  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:06.631952  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:06.632014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:06.656834  832221 cri.go:89] found id: ""
	I1208 00:42:06.656847  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.656855  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:06.656859  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:06.656915  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:06.681945  832221 cri.go:89] found id: ""
	I1208 00:42:06.681960  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.681967  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:06.681972  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:06.682029  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:06.710714  832221 cri.go:89] found id: ""
	I1208 00:42:06.710728  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.710735  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:06.710741  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:06.710798  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:06.737689  832221 cri.go:89] found id: ""
	I1208 00:42:06.737703  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.737710  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:06.737716  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:06.737773  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:06.763380  832221 cri.go:89] found id: ""
	I1208 00:42:06.763394  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.763401  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:06.763406  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:06.763468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:06.788657  832221 cri.go:89] found id: ""
	I1208 00:42:06.788672  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.788679  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:06.788684  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:06.788743  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:06.814619  832221 cri.go:89] found id: ""
	I1208 00:42:06.814633  832221 logs.go:282] 0 containers: []
	W1208 00:42:06.814641  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:06.814648  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:06.814659  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:06.876947  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:06.868940   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.869712   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871283   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.871608   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:06.873121   16475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:06.876957  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:06.876967  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:06.945083  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:06.945103  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:06.975476  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:06.975492  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:07.049079  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:07.049111  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.568507  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:09.578816  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:09.578896  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:09.604243  832221 cri.go:89] found id: ""
	I1208 00:42:09.604264  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.604271  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:09.604276  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:09.604335  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:09.629065  832221 cri.go:89] found id: ""
	I1208 00:42:09.629079  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.629086  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:09.629091  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:09.629187  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:09.657275  832221 cri.go:89] found id: ""
	I1208 00:42:09.657288  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.657295  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:09.657300  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:09.657356  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:09.683416  832221 cri.go:89] found id: ""
	I1208 00:42:09.683431  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.683438  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:09.683443  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:09.683500  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:09.709238  832221 cri.go:89] found id: ""
	I1208 00:42:09.709261  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.709269  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:09.709274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:09.709339  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:09.734114  832221 cri.go:89] found id: ""
	I1208 00:42:09.734128  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.734134  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:09.734152  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:09.734209  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:09.759311  832221 cri.go:89] found id: ""
	I1208 00:42:09.759325  832221 logs.go:282] 0 containers: []
	W1208 00:42:09.759331  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:09.759339  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:09.759349  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:09.824496  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:09.824516  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:09.841803  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:09.841820  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:09.904180  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:09.896672   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.897046   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898489   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.898785   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:09.900277   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:09.904190  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:09.904207  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:09.971074  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:09.971095  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:12.508051  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:12.518216  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:12.518274  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:12.544077  832221 cri.go:89] found id: ""
	I1208 00:42:12.544098  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.544105  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:12.544121  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:12.544183  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:12.573722  832221 cri.go:89] found id: ""
	I1208 00:42:12.573737  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.573744  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:12.573749  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:12.573814  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:12.605486  832221 cri.go:89] found id: ""
	I1208 00:42:12.605500  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.605508  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:12.605513  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:12.605573  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:12.630248  832221 cri.go:89] found id: ""
	I1208 00:42:12.630262  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.630269  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:12.630274  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:12.630334  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:12.657639  832221 cri.go:89] found id: ""
	I1208 00:42:12.657653  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.657660  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:12.657665  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:12.657729  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:12.687466  832221 cri.go:89] found id: ""
	I1208 00:42:12.687488  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.687495  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:12.687501  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:12.687560  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:12.712697  832221 cri.go:89] found id: ""
	I1208 00:42:12.712713  832221 logs.go:282] 0 containers: []
	W1208 00:42:12.712720  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:12.712729  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:12.712740  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:12.782236  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:12.782256  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:12.798869  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:12.798890  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:12.869748  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:12.861203   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862047   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.862926   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864396   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:12.864821   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:12.869759  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:12.869772  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:12.940819  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:12.940839  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:15.471472  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:15.481993  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:15.482061  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:15.508029  832221 cri.go:89] found id: ""
	I1208 00:42:15.508043  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.508050  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:15.508055  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:15.508114  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:15.533198  832221 cri.go:89] found id: ""
	I1208 00:42:15.533212  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.533219  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:15.533224  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:15.533293  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:15.559200  832221 cri.go:89] found id: ""
	I1208 00:42:15.559215  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.559222  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:15.559230  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:15.559292  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:15.586368  832221 cri.go:89] found id: ""
	I1208 00:42:15.586382  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.586389  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:15.586394  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:15.586463  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:15.613829  832221 cri.go:89] found id: ""
	I1208 00:42:15.613862  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.613870  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:15.613875  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:15.613939  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:15.638601  832221 cri.go:89] found id: ""
	I1208 00:42:15.638616  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.638623  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:15.638629  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:15.638687  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:15.663577  832221 cri.go:89] found id: ""
	I1208 00:42:15.663592  832221 logs.go:282] 0 containers: []
	W1208 00:42:15.663599  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:15.663606  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:15.663617  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:15.729315  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:15.729346  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:15.746062  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:15.746081  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:15.817222  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:15.808780   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.809460   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.810376   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.811843   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:15.812281   16793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:15.817234  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:15.817246  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:15.884896  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:15.884916  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.414159  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:18.424398  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:18.424464  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:18.454155  832221 cri.go:89] found id: ""
	I1208 00:42:18.454169  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.454177  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:18.454183  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:18.454245  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:18.479882  832221 cri.go:89] found id: ""
	I1208 00:42:18.479896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.479904  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:18.479909  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:18.479969  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:18.505299  832221 cri.go:89] found id: ""
	I1208 00:42:18.505313  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.505320  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:18.505325  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:18.505383  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:18.532868  832221 cri.go:89] found id: ""
	I1208 00:42:18.532881  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.532889  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:18.532894  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:18.532954  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:18.561651  832221 cri.go:89] found id: ""
	I1208 00:42:18.561664  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.561671  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:18.561677  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:18.561735  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:18.589482  832221 cri.go:89] found id: ""
	I1208 00:42:18.589496  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.589503  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:18.589509  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:18.589566  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:18.613882  832221 cri.go:89] found id: ""
	I1208 00:42:18.613896  832221 logs.go:282] 0 containers: []
	W1208 00:42:18.613904  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:18.613911  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:18.613922  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:18.641758  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:18.641774  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:18.717185  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:18.717210  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:18.734137  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:18.734155  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:18.802653  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:18.794373   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.795187   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.796738   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.797066   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:18.798566   16909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:18.802664  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:18.802676  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.371665  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:21.383636  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:21.383698  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:21.408072  832221 cri.go:89] found id: ""
	I1208 00:42:21.408086  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.408093  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:21.408098  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:21.408155  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:21.432924  832221 cri.go:89] found id: ""
	I1208 00:42:21.432948  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.432955  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:21.432961  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:21.433025  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:21.457883  832221 cri.go:89] found id: ""
	I1208 00:42:21.457897  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.457904  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:21.457909  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:21.457967  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:21.483388  832221 cri.go:89] found id: ""
	I1208 00:42:21.483402  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.483410  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:21.483415  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:21.483475  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:21.509434  832221 cri.go:89] found id: ""
	I1208 00:42:21.509448  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.509456  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:21.509461  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:21.509519  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:21.534437  832221 cri.go:89] found id: ""
	I1208 00:42:21.534451  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.534458  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:21.534464  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:21.534521  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:21.559919  832221 cri.go:89] found id: ""
	I1208 00:42:21.559932  832221 logs.go:282] 0 containers: []
	W1208 00:42:21.559939  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:21.559949  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:21.559959  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:21.625640  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:21.625661  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:21.645629  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:21.645648  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:21.714153  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:21.705321   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.705810   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707534   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.707887   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:21.710122   16999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:21.714163  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:21.714173  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:21.781175  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:21.781196  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:24.310973  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:24.321986  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:24.322048  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:24.348885  832221 cri.go:89] found id: ""
	I1208 00:42:24.348899  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.348906  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:24.348912  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:24.348972  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:24.378380  832221 cri.go:89] found id: ""
	I1208 00:42:24.378394  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.378401  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:24.378407  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:24.378468  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:24.403905  832221 cri.go:89] found id: ""
	I1208 00:42:24.403922  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.403933  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:24.403938  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:24.404014  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:24.433947  832221 cri.go:89] found id: ""
	I1208 00:42:24.433961  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.433969  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:24.433975  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:24.434037  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:24.459342  832221 cri.go:89] found id: ""
	I1208 00:42:24.459356  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.459363  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:24.459368  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:24.459429  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:24.484750  832221 cri.go:89] found id: ""
	I1208 00:42:24.484764  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.484771  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:24.484777  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:24.484832  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:24.514464  832221 cri.go:89] found id: ""
	I1208 00:42:24.514478  832221 logs.go:282] 0 containers: []
	W1208 00:42:24.514493  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:24.514501  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:24.514512  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:24.580016  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:24.580037  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:24.598055  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:24.598071  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:24.664079  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:24.655587   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.656522   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658051   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.658377   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:24.659893   17101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:24.664089  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:24.664099  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:24.733616  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:24.733639  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:27.263764  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:27.274828  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:27.274913  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:27.305226  832221 cri.go:89] found id: ""
	I1208 00:42:27.305241  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.305248  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:27.305253  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:27.305312  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:27.330800  832221 cri.go:89] found id: ""
	I1208 00:42:27.330815  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.330822  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:27.330827  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:27.330914  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:27.357232  832221 cri.go:89] found id: ""
	I1208 00:42:27.357246  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.357253  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:27.357258  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:27.357314  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:27.385173  832221 cri.go:89] found id: ""
	I1208 00:42:27.385186  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.385193  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:27.385199  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:27.385264  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:27.415410  832221 cri.go:89] found id: ""
	I1208 00:42:27.415423  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.415430  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:27.415435  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:27.415491  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:27.441114  832221 cri.go:89] found id: ""
	I1208 00:42:27.441128  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.441135  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:27.441140  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:27.441204  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:27.468819  832221 cri.go:89] found id: ""
	I1208 00:42:27.468833  832221 logs.go:282] 0 containers: []
	W1208 00:42:27.468841  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:27.468849  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:27.468859  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:27.534615  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:27.534638  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:42:27.552028  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:27.552044  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:27.617298  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:27.609689   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.610185   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.611684   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.612110   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:27.613566   17205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:27.617308  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:27.617318  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:27.685006  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:27.685026  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.213024  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:30.223536  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:42:30.223597  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:42:30.252285  832221 cri.go:89] found id: ""
	I1208 00:42:30.252299  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.252306  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:42:30.252311  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:42:30.252378  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:42:30.283908  832221 cri.go:89] found id: ""
	I1208 00:42:30.283922  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.283931  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:42:30.283936  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:42:30.283994  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:42:30.318884  832221 cri.go:89] found id: ""
	I1208 00:42:30.318899  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.318906  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:42:30.318912  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:42:30.318968  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:42:30.349060  832221 cri.go:89] found id: ""
	I1208 00:42:30.349075  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.349082  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:42:30.349088  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:42:30.349164  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:42:30.376813  832221 cri.go:89] found id: ""
	I1208 00:42:30.376829  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.376837  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:42:30.376842  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:42:30.376901  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:42:30.404729  832221 cri.go:89] found id: ""
	I1208 00:42:30.404744  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.404750  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:42:30.404756  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:42:30.404819  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:42:30.431212  832221 cri.go:89] found id: ""
	I1208 00:42:30.431226  832221 logs.go:282] 0 containers: []
	W1208 00:42:30.431233  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:42:30.431241  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:42:30.431251  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:42:30.498900  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:42:30.490024   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.490682   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.492420   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.493158   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:42:30.494769   17302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:42:30.498911  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:42:30.498921  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:42:30.567676  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:42:30.567699  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:42:30.596733  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:42:30.596749  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:42:30.662190  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:42:30.662211  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
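Note: the block above is one iteration of a polling loop that repeats roughly every three seconds while minikube waits for the restarted control plane: pgrep for a kube-apiserver process, then crictl ps for each expected component, then kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering. Every iteration in this run finds zero containers. A condensed, hypothetical sketch of the same per-component probe, using only commands already shown in the log:

    # check each control-plane component the way the log does (names taken from the log)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching $c"
    done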
	I1208 00:42:33.179806  832221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:42:33.190715  832221 kubeadm.go:602] duration metric: took 4m2.701897978s to restartPrimaryControlPlane
	W1208 00:42:33.190784  832221 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:42:33.190886  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
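Note: restartPrimaryControlPlane gave up after 4m2.7s (the duration metric two lines above), so minikube falls back to wiping the control-plane state and re-initializing from scratch. The reset command it runs is the long line above, reflowed here only for readability as an equivalent single command (flags copied from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force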
	I1208 00:42:33.600155  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:42:33.612954  832221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:42:33.620726  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:42:33.620779  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:42:33.628462  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:42:33.628471  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:42:33.628522  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:42:33.636365  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:42:33.636420  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:42:33.643722  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:42:33.651305  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:42:33.651360  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:42:33.658707  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.666176  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:42:33.666232  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:42:33.673523  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:42:33.681031  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:42:33.681086  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
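Note: the grep/rm pairs above are the stale-kubeconfig check minikube performs before re-running kubeadm init: each file under /etc/kubernetes is kept only if it already points at the expected endpoint https://control-plane.minikube.internal:8441. Here all four files are already gone after the reset, so every grep exits with status 2 and the rm calls are no-ops. A sketch of that check-and-remove pass, assuming the same endpoint and file list shown in the log:

    ENDPOINT='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected control-plane endpoint
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done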
	I1208 00:42:33.688609  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:42:33.724887  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:42:33.724941  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:42:33.797997  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:42:33.798062  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:42:33.798096  832221 kubeadm.go:319] OS: Linux
	I1208 00:42:33.798139  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:42:33.798186  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:42:33.798232  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:42:33.798279  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:42:33.798325  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:42:33.798372  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:42:33.798416  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:42:33.798462  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:42:33.798507  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:42:33.859952  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:42:33.860071  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:42:33.860170  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:42:33.868067  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:42:33.869917  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:42:33.869999  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:42:33.870063  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:42:33.870137  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:42:33.870197  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:42:33.870265  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:42:33.870368  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:42:33.870448  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:42:33.870928  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:42:33.871217  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:42:33.871538  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:42:33.871740  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:42:33.871797  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:42:34.028121  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:42:34.367427  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:42:34.702083  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:42:35.025762  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:42:35.511131  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:42:35.511826  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:42:35.514836  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:42:35.516409  832221 out.go:252]   - Booting up control plane ...
	I1208 00:42:35.516507  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:42:35.516848  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:42:35.519384  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:42:35.533955  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:42:35.534084  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:42:35.541753  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:42:35.542016  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:42:35.542213  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:42:35.674531  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:42:35.674638  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:46:35.675373  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115059s
	I1208 00:46:35.675397  832221 kubeadm.go:319] 
	I1208 00:46:35.675450  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:46:35.675480  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:46:35.675578  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:46:35.675582  832221 kubeadm.go:319] 
	I1208 00:46:35.675680  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:46:35.675709  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:46:35.675738  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:46:35.675741  832221 kubeadm.go:319] 
	I1208 00:46:35.680376  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:46:35.680807  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:46:35.680915  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:46:35.681162  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:46:35.681167  832221 kubeadm.go:319] 
	I1208 00:46:35.681238  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:46:35.681347  832221 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115059s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1208 00:46:35.681436  832221 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 00:46:36.099633  832221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:46:36.112518  832221 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:46:36.112573  832221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:46:36.120714  832221 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:46:36.120723  832221 kubeadm.go:158] found existing configuration files:
	
	I1208 00:46:36.120772  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:46:36.128165  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:46:36.128218  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:46:36.135603  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:46:36.142958  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:46:36.143011  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:46:36.150557  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.158107  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:46:36.158166  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:46:36.165315  832221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:46:36.172678  832221 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:46:36.172733  832221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:46:36.179983  832221 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:46:36.221281  832221 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:46:36.221576  832221 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:46:36.304904  832221 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:46:36.304971  832221 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:46:36.305006  832221 kubeadm.go:319] OS: Linux
	I1208 00:46:36.305062  832221 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:46:36.305109  832221 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:46:36.305154  832221 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:46:36.305201  832221 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:46:36.305247  832221 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:46:36.305299  832221 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:46:36.305343  832221 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:46:36.305391  832221 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:46:36.305437  832221 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:46:36.375885  832221 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:46:36.375986  832221 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:46:36.376075  832221 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:46:36.387291  832221 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:46:36.389104  832221 out.go:252]   - Generating certificates and keys ...
	I1208 00:46:36.389182  832221 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:46:36.389272  832221 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:46:36.389371  832221 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:46:36.389436  832221 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:46:36.389506  832221 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:46:36.389559  832221 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:46:36.389626  832221 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:46:36.389691  832221 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:46:36.389770  832221 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:46:36.389858  832221 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:46:36.389893  832221 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:46:36.389946  832221 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:46:37.029886  832221 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:46:37.175943  832221 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:46:37.229666  832221 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:46:37.386162  832221 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:46:37.721262  832221 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:46:37.722365  832221 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:46:37.726361  832221 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:46:37.727820  832221 out.go:252]   - Booting up control plane ...
	I1208 00:46:37.727919  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:46:37.727991  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:46:37.728873  832221 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:46:37.743822  832221 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:46:37.744021  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:46:37.751812  832221 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:46:37.751899  832221 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:46:37.751935  832221 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:46:37.878966  832221 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:46:37.879079  832221 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:50:37.879778  832221 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187421s
	I1208 00:50:37.879803  832221 kubeadm.go:319] 
	I1208 00:50:37.879860  832221 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:50:37.879893  832221 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:50:37.879997  832221 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:50:37.880002  832221 kubeadm.go:319] 
	I1208 00:50:37.880106  832221 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:50:37.880137  832221 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:50:37.880167  832221 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:50:37.880170  832221 kubeadm.go:319] 
	I1208 00:50:37.885162  832221 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:50:37.885617  832221 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:50:37.885748  832221 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:50:37.886002  832221 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:50:37.886010  832221 kubeadm.go:319] 
	I1208 00:50:37.886091  832221 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:50:37.886152  832221 kubeadm.go:403] duration metric: took 12m7.43140026s to StartCluster
	I1208 00:50:37.886198  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:50:37.886263  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:50:37.913929  832221 cri.go:89] found id: ""
	I1208 00:50:37.913943  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.913950  832221 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:50:37.913956  832221 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 00:50:37.914018  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:50:37.940084  832221 cri.go:89] found id: ""
	I1208 00:50:37.940099  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.940106  832221 logs.go:284] No container was found matching "etcd"
	I1208 00:50:37.940111  832221 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 00:50:37.940168  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:50:37.965369  832221 cri.go:89] found id: ""
	I1208 00:50:37.965385  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.965392  832221 logs.go:284] No container was found matching "coredns"
	I1208 00:50:37.965397  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:50:37.965454  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:50:37.991902  832221 cri.go:89] found id: ""
	I1208 00:50:37.991916  832221 logs.go:282] 0 containers: []
	W1208 00:50:37.991923  832221 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:50:37.991929  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:50:37.991989  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:50:38.041593  832221 cri.go:89] found id: ""
	I1208 00:50:38.041607  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.041614  832221 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:50:38.041619  832221 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:50:38.041681  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:50:38.082440  832221 cri.go:89] found id: ""
	I1208 00:50:38.082454  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.082461  832221 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:50:38.082467  832221 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 00:50:38.082527  832221 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:50:38.108776  832221 cri.go:89] found id: ""
	I1208 00:50:38.108794  832221 logs.go:282] 0 containers: []
	W1208 00:50:38.108804  832221 logs.go:284] No container was found matching "kindnet"
	I1208 00:50:38.108813  832221 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:50:38.108827  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:50:38.179358  832221 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:50:38.170980   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.171693   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173350   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.173810   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:50:38.175281   21093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:50:38.179368  832221 logs.go:123] Gathering logs for CRI-O ...
	I1208 00:50:38.179379  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 00:50:38.249264  832221 logs.go:123] Gathering logs for container status ...
	I1208 00:50:38.249284  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:50:38.283297  832221 logs.go:123] Gathering logs for kubelet ...
	I1208 00:50:38.283313  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:50:38.352336  832221 logs.go:123] Gathering logs for dmesg ...
	I1208 00:50:38.352356  832221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 00:50:38.370094  832221 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:50:38.370135  832221 out.go:285] * 
	W1208 00:50:38.370244  832221 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.370347  832221 out.go:285] * 
	W1208 00:50:38.372671  832221 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:50:38.375987  832221 out.go:203] 
	W1208 00:50:38.377331  832221 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187421s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:50:38.377432  832221 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:50:38.377486  832221 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:50:38.378650  832221 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976141949Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976389032Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976505948Z" level=info msg="Create NRI interface"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976728531Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976803559Z" level=info msg="runtime interface created"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976871433Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976925095Z" level=info msg="runtime interface starting up..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.976975737Z" level=info msg="starting plugins..."
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.977043373Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:38:28 functional-525396 crio[9946]: time="2025-12-08T00:38:28.97717112Z" level=info msg="No systemd watchdog enabled"
	Dec 08 00:38:28 functional-525396 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.863535575Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=86c63571-1518-417d-8c36-88972a10f046 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864340284Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd30f3d8-2e57-4e42-9d38-12f0c72774a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.864886538Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2294e0c2-3c35-4ad2-b70e-1cf27e140e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865379712Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=8bd0e2b4-0a84-462b-a4c0-b4ef6c82ea6b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.865907537Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6aa3aa31-43f2-49f4-affe-a3c22725ca07 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.86644149Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ab7db80c-c2d4-4d6c-acf1-db4a7ce32608 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:42:33 functional-525396 crio[9946]: time="2025-12-08T00:42:33.867005106Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fe935a58-ea6c-4485-86ff-51db887cec2b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:52:30.376875   22508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:30.377296   22508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:30.378828   22508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:30.379567   22508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:30.381257   22508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:52:30 up  5:34,  0 user,  load average: 0.24, 0.24, 0.42
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:52:27 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:28 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1107.
	Dec 08 00:52:28 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:28 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:28 functional-525396 kubelet[22395]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:28 functional-525396 kubelet[22395]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:28 functional-525396 kubelet[22395]: E1208 00:52:28.310555   22395 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:28 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:28 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1108.
	Dec 08 00:52:29 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:29 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:29 functional-525396 kubelet[22401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:29 functional-525396 kubelet[22401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:29 functional-525396 kubelet[22401]: E1208 00:52:29.075662   22401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1109.
	Dec 08 00:52:29 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:29 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:29 functional-525396 kubelet[22422]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:29 functional-525396 kubelet[22422]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:29 functional-525396 kubelet[22422]: E1208 00:52:29.824030   22422 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:29 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (385.583776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 11 times in succession)
I1208 00:50:55.597691  791807 retry.go:31] will retry after 4.016056174s: Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 14 times in succession)
I1208 00:51:09.615230  791807 retry.go:31] will retry after 2.521938436s: Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 12 times in succession)
I1208 00:51:22.138096  791807 retry.go:31] will retry after 7.144777002s: Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 17 times in succession)
I1208 00:51:39.284002  791807 retry.go:31] will retry after 7.512719821s: Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 18 times in succession)
I1208 00:51:56.797752  791807 retry.go:31] will retry after 21.805772909s: Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 50 times in succession)
E1208 00:52:46.335551  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	(this WARNING line occurred 44 times in succession)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1208 00:53:37.455259  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(last warning repeated 66 more times while the test retried the pod list against the unreachable apiserver)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
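The symptom behind these warnings can be reproduced from the host by hitting the apiserver endpoint directly (a sketch; the address and port are taken from the warnings above, and -k only skips certificate verification):

    curl -k https://192.168.49.2:8441/healthz

With the apiserver stopped, curl fails with the same "connection refused" instead of returning "ok".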
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (330.601019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
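For reference, the condition the test was polling for can be checked by hand with stock kubectl once the apiserver is reachable again (a sketch; the namespace and label selector come from the warnings above):

    kubectl get pods -n kube-system -l integration-test=storage-provisioner
    kubectl wait --for=condition=Ready pod -n kube-system -l integration-test=storage-provisioner --timeout=4m0s

The second command mirrors the test's 4m0s budget and exits non-zero on the same timeout.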
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
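When only the container state or the published apiserver port is of interest, the full inspect dump above can be narrowed with a Go template or docker port (a sketch using the profile container from this run):

    docker inspect -f '{{.State.Status}}' functional-525396
    docker port functional-525396 8441/tcp

The first prints "running", matching the State block above; the second prints the 127.0.0.1:33511 mapping listed under NetworkSettings.Ports.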
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (300.030716ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-525396 image ls                                                                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image          │ functional-525396 image save --daemon kicbase/echo-server:functional-525396 --alsologtostderr                                                              │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /etc/ssl/certs/791807.pem                                                                                                   │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /usr/share/ca-certificates/791807.pem                                                                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                   │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /etc/ssl/certs/7918072.pem                                                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /usr/share/ca-certificates/7918072.pem                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                   │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh sudo cat /etc/test/nested/copy/791807/hosts                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ cp             │ functional-525396 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                         │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh -n functional-525396 sudo cat /home/docker/cp-test.txt                                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ cp             │ functional-525396 cp functional-525396:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp59427023/001/cp-test.txt │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh -n functional-525396 sudo cat /home/docker/cp-test.txt                                                                               │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ cp             │ functional-525396 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh -n functional-525396 sudo cat /tmp/does/not/exist/cp-test.txt                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image          │ functional-525396 image ls --format short --alsologtostderr                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image          │ functional-525396 image ls --format json --alsologtostderr                                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh            │ functional-525396 ssh pgrep buildkitd                                                                                                                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ image          │ functional-525396 image build -t localhost/my-image:functional-525396 testdata/build --alsologtostderr                                                     │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ image          │ functional-525396 image ls                                                                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ image          │ functional-525396 image ls --format yaml --alsologtostderr                                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ image          │ functional-525396 image ls --format table --alsologtostderr                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ update-context │ functional-525396 update-context --alsologtostderr -v=2                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ update-context │ functional-525396 update-context --alsologtostderr -v=2                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	│ update-context │ functional-525396 update-context --alsologtostderr -v=2                                                                                                    │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:53 UTC │ 08 Dec 25 00:53 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:52:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:52:45.574627  849050 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:52:45.574939  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.574973  849050 out.go:374] Setting ErrFile to fd 2...
	I1208 00:52:45.575000  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.575412  849050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:52:45.575930  849050 out.go:368] Setting JSON to false
	I1208 00:52:45.577075  849050 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20098,"bootTime":1765135068,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:52:45.577197  849050 start.go:143] virtualization:  
	I1208 00:52:45.581599  849050 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:52:45.584680  849050 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:52:45.584765  849050 notify.go:221] Checking for updates...
	I1208 00:52:45.590612  849050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:52:45.593456  849050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:52:45.596411  849050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:52:45.599251  849050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:52:45.602027  849050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:52:45.605459  849050 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:52:45.606098  849050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:52:45.639100  849050 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:52:45.639273  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.705725  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.696388169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.705834  849050 docker.go:319] overlay module found
	I1208 00:52:45.708920  849050 out.go:179] * Using the docker driver based on existing profile
	I1208 00:52:45.711815  849050 start.go:309] selected driver: docker
	I1208 00:52:45.711841  849050 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.711946  849050 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:52:45.712065  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.768465  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.759533195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.768916  849050 cni.go:84] Creating CNI manager for ""
	I1208 00:52:45.768986  849050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:52:45.769029  849050 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.771959  849050 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202409993Z" level=info msg="Checking image status: kicbase/echo-server:functional-525396" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.20261724Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202675653Z" level=info msg="Image kicbase/echo-server:functional-525396 not found" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202750763Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-525396 found" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.226753347Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-525396" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.227095365Z" level=info msg="Image docker.io/kicbase/echo-server:functional-525396 not found" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.227147993Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-525396 found" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.250465786Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-525396" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.25063314Z" level=info msg="Image localhost/kicbase/echo-server:functional-525396 not found" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.250671614Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-525396 found" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.30299607Z" level=info msg="Checking image status: kicbase/echo-server:functional-525396" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303150935Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303193028Z" level=info msg="Image kicbase/echo-server:functional-525396 not found" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303259712Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-525396 found" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332665565Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-525396" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332832213Z" level=info msg="Image docker.io/kicbase/echo-server:functional-525396 not found" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332886507Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-525396 found" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.3577301Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-525396" id=5794cfed-c2c5-4cb0-9514-1201b7bf8305 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:54:46.592850   25361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:54:46.593506   25361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:54:46.595214   25361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:54:46.595858   25361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:54:46.597540   25361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:54:46 up  5:36,  0 user,  load average: 0.24, 0.27, 0.41
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:54:44 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:54:44 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1289.
	Dec 08 00:54:44 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:44 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:44 functional-525396 kubelet[25234]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:44 functional-525396 kubelet[25234]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:44 functional-525396 kubelet[25234]: E1208 00:54:44.808273   25234 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:54:44 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:54:44 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:54:45 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1290.
	Dec 08 00:54:45 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:45 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:45 functional-525396 kubelet[25242]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:45 functional-525396 kubelet[25242]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:45 functional-525396 kubelet[25242]: E1208 00:54:45.553078   25242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:54:45 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:54:45 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:54:46 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1291.
	Dec 08 00:54:46 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:46 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:54:46 functional-525396 kubelet[25282]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:46 functional-525396 kubelet[25282]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:54:46 functional-525396 kubelet[25282]: E1208 00:54:46.309771   25282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:54:46 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:54:46 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (341.984246ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-525396 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-525396 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (64.269156ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-525396 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-525396
helpers_test.go:243: (dbg) docker inspect functional-525396:

-- stdout --
	[
	    {
	        "Id": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	        "Created": "2025-12-08T00:23:45.155317904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 820862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:23:45.250230726Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hostname",
	        "HostsPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/hosts",
	        "LogPath": "/var/lib/docker/containers/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067/6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067-json.log",
	        "Name": "/functional-525396",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-525396:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-525396",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6087be7eb41442f1528764894e008c6b62a85b07ebdc16186d83c4517aadd067",
	                "LowerDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bbef5025458500fe9a5f7051160b3137a6a29797c5493b4722cf969cfa2c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-525396",
	                "Source": "/var/lib/docker/volumes/functional-525396/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-525396",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-525396",
	                "name.minikube.sigs.k8s.io": "functional-525396",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bd3b93831eb1ce14e7ba9d44dfe78107d2550b6d8e517599fea3b0192787a4d",
	            "SandboxKey": "/var/run/docker/netns/9bd3b93831eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-525396": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:19:07:35:0b:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0acf5482583d1db2eb15c20feb1ce07c2696db4bcf0d04606bbd052c2b7c25d",
	                    "EndpointID": "7a9844675841bdcced7da72ac1d18ee1cebeb0e6a085ae72d1d331ea6f0e3283",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-525396",
	                        "6087be7eb414"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-525396 -n functional-525396: exit status 2 (317.446588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount3 --alsologtostderr -v=1                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ mount     │ -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount2 --alsologtostderr -v=1                      │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh findmnt -T /mount1                                                                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh findmnt -T /mount2                                                                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh findmnt -T /mount3                                                                                                                  │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ mount     │ -p functional-525396 --kill=true                                                                                                                          │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ start     │ -p functional-525396 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-525396 --alsologtostderr -v=1                                                                                            │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ ssh       │ functional-525396 ssh sudo systemctl is-active docker                                                                                                     │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ ssh       │ functional-525396 ssh sudo systemctl is-active containerd                                                                                                 │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │                     │
	│ image     │ functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image ls                                                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image ls                                                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image ls                                                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image save kicbase/echo-server:functional-525396 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image rm kicbase/echo-server:functional-525396 --alsologtostderr                                                                        │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image ls                                                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image ls                                                                                                                                │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	│ image     │ functional-525396 image save --daemon kicbase/echo-server:functional-525396 --alsologtostderr                                                             │ functional-525396 │ jenkins │ v1.37.0 │ 08 Dec 25 00:52 UTC │ 08 Dec 25 00:52 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:52:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:52:45.574627  849050 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:52:45.574939  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.574973  849050 out.go:374] Setting ErrFile to fd 2...
	I1208 00:52:45.575000  849050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.575412  849050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:52:45.575930  849050 out.go:368] Setting JSON to false
	I1208 00:52:45.577075  849050 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20098,"bootTime":1765135068,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:52:45.577197  849050 start.go:143] virtualization:  
	I1208 00:52:45.581599  849050 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:52:45.584680  849050 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:52:45.584765  849050 notify.go:221] Checking for updates...
	I1208 00:52:45.590612  849050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:52:45.593456  849050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:52:45.596411  849050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:52:45.599251  849050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:52:45.602027  849050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:52:45.605459  849050 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:52:45.606098  849050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:52:45.639100  849050 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:52:45.639273  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.705725  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.696388169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.705834  849050 docker.go:319] overlay module found
	I1208 00:52:45.708920  849050 out.go:179] * Using the docker driver based on existing profile
	I1208 00:52:45.711815  849050 start.go:309] selected driver: docker
	I1208 00:52:45.711841  849050 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.711946  849050 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:52:45.712065  849050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.768465  849050 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.759533195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.768916  849050 cni.go:84] Creating CNI manager for ""
	I1208 00:52:45.768986  849050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:52:45.769029  849050 start.go:353] cluster config:
	{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.771959  849050 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.379530292Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7d727b4f-816a-4502-9597-ea503bf0aee1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380164514Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=bf518cf5-2ff1-4087-a708-d83b92d9a896 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.380672424Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e84d8992-bd54-4d27-b704-b4150688f709 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381098578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1cca409d-3447-405a-9e1e-329c5f88d5cc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.381567621Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2d83155-ae4f-4891-a7d6-074729547c87 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382051203Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7cef86b7-fb7c-4597-855d-c4bfd350fbd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:46:36 functional-525396 crio[9946]: time="2025-12-08T00:46:36.382504016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8053aa82-1216-421d-89a3-d35cef80aff0 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202409993Z" level=info msg="Checking image status: kicbase/echo-server:functional-525396" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.20261724Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202675653Z" level=info msg="Image kicbase/echo-server:functional-525396 not found" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.202750763Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-525396 found" id=d02ef22a-8243-4edc-8386-3b061ace8562 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.226753347Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-525396" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.227095365Z" level=info msg="Image docker.io/kicbase/echo-server:functional-525396 not found" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.227147993Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-525396 found" id=7f8f1bcd-8dd1-4d69-b48b-19e9b1462a71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.250465786Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-525396" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.25063314Z" level=info msg="Image localhost/kicbase/echo-server:functional-525396 not found" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:49 functional-525396 crio[9946]: time="2025-12-08T00:52:49.250671614Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-525396 found" id=838f650f-3d03-4c0e-be39-dc22193ba7bb name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.30299607Z" level=info msg="Checking image status: kicbase/echo-server:functional-525396" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303150935Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303193028Z" level=info msg="Image kicbase/echo-server:functional-525396 not found" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.303259712Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-525396 found" id=42b1ff5a-2eb9-4a57-9ba5-db4d331976dc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332665565Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-525396" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332832213Z" level=info msg="Image docker.io/kicbase/echo-server:functional-525396 not found" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.332886507Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-525396 found" id=8f9433d0-d97e-4636-a35a-958c2d6444df name=/runtime.v1.ImageService/ImageStatus
	Dec 08 00:52:52 functional-525396 crio[9946]: time="2025-12-08T00:52:52.3577301Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-525396" id=5794cfed-c2c5-4cb0-9514-1201b7bf8305 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:52:54.808944   23872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:54.809706   23872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:54.811334   23872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:54.811843   23872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:52:54.813446   23872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +45.413534] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:25] overlayfs: idmapped layers are currently not supported
	[ +34.781620] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:26] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:27] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:28] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:30] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:31] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:42] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:44] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:45] overlayfs: idmapped layers are currently not supported
	[ +17.647904] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:46] overlayfs: idmapped layers are currently not supported
	[ +33.061086] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:47] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:49] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:57] overlayfs: idmapped layers are currently not supported
	[Dec 7 23:58] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:00] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 8 00:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 00:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:52:54 up  5:35,  0 user,  load average: 0.59, 0.32, 0.44
	Linux functional-525396 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:52:52 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 08 00:52:53 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:53 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:53 functional-525396 kubelet[23713]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:53 functional-525396 kubelet[23713]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:53 functional-525396 kubelet[23713]: E1208 00:52:53.069086   23713 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 08 00:52:53 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:53 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:53 functional-525396 kubelet[23768]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:53 functional-525396 kubelet[23768]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:53 functional-525396 kubelet[23768]: E1208 00:52:53.798534   23768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:53 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:52:54 functional-525396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 08 00:52:54 functional-525396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:54 functional-525396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:52:54 functional-525396 kubelet[23807]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:54 functional-525396 kubelet[23807]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 00:52:54 functional-525396 kubelet[23807]: E1208 00:52:54.566322   23807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:52:54 functional-525396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:52:54 functional-525396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396 -n functional-525396: exit status 2 (323.314149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-525396" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)
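The kubelet restart loop captured above keeps failing the same validation: the node is on cgroup v1, which this kubelet build refuses to run on. A minimal way to confirm the cgroup version on the node is sketched below; the binary path, profile name, and ssh invocation style are taken from this report, while the cgroup2fs/tmpfs convention of stat is standard Linux behaviour rather than anything shown in the log.

    # Print the filesystem type backing /sys/fs/cgroup inside the minikube node:
    # "cgroup2fs" means cgroup v2, "tmpfs" means the cgroup v1 layout kubelet rejects here.
    out/minikube-linux-arm64 -p functional-525396 ssh "stat -fc %T /sys/fs/cgroup/"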

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1208 00:50:44.973712  844975 out.go:360] Setting OutFile to fd 1 ...
I1208 00:50:44.973970  844975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:50:44.973990  844975 out.go:374] Setting ErrFile to fd 2...
I1208 00:50:44.974008  844975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:50:44.974336  844975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:50:44.974693  844975 mustload.go:66] Loading cluster: functional-525396
I1208 00:50:44.975153  844975 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:50:44.975661  844975 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:50:45.035801  844975 host.go:66] Checking if "functional-525396" exists ...
I1208 00:50:45.036764  844975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:50:45.161645  844975 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 00:50:45.148684374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:50:45.161780  844975 api_server.go:166] Checking apiserver status ...
I1208 00:50:45.161844  844975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1208 00:50:45.161894  844975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:50:45.231860  844975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
W1208 00:50:45.366190  844975 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1208 00:50:45.367598  844975 out.go:179] * The control-plane node functional-525396 apiserver is not running: (state=Stopped)
I1208 00:50:45.368984  844975 out.go:179]   To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
stdout: * The control-plane node functional-525396 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-525396"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 844974: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)
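Exit code 103 here accompanies minikube's "apiserver is not running" advisory, so the second tunnel never gets as far as creating routes. The apiserver state can be confirmed with the same status invocation the test helpers run elsewhere in this report; this is a sketch reusing that command, not an additional check the test itself performs.

    # Report only the apiserver field for the profile; "Stopped" matches the
    # state shown in the tunnel stderr above.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-525396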

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-525396 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-525396 apply -f testdata/testsvc.yaml: exit status 1 (112.856948ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-525396 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (103.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.101.246.49": Temporary Error: Get "http://10.101.246.49": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-525396 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-525396 get svc nginx-svc: exit status 1 (65.317034ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-525396 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (103.07s)
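The access check never reaches nginx because the cluster behind the tunnel is down, so the GET against the ClusterIP simply times out. A bounded manual probe of the same address is sketched below; the IP is copied from the failure message and a running tunnel process is assumed, so in this state it would be expected to time out as well.

    # Probe the nginx service IP from the failure above with a 5-second cap.
    curl --max-time 5 -sS http://10.101.246.49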

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-525396 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-525396 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.595075ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-525396 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 service list: exit status 103 (266.419392ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-525396 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-525396 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-525396 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-525396\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 service list -o json: exit status 103 (254.980986ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-525396 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-525396 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 service --namespace=default --https --url hello-node: exit status 103 (288.297426ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-525396 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-525396 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 service hello-node --url --format={{.IP}}: exit status 103 (269.187305ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-525396 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-525396 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-525396 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-525396\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 service hello-node --url: exit status 103 (255.28032ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-525396 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-525396"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-525396 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-525396 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-525396"
functional_test.go:1579: failed to parse "* The control-plane node functional-525396 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-525396\"": parse "* The control-plane node functional-525396 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-525396\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765155156307702116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765155156307702116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765155156307702116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001/test-1765155156307702116
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.431213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 00:52:36.686447  791807 retry.go:31] will retry after 298.340658ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 00:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 00:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 00:52 test-1765155156307702116
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh cat /mount-9p/test-1765155156307702116
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-525396 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-525396 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.705213ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-525396 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (304.623364ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=46811)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  8 00:52 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  8 00:52 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  8 00:52 test-1765155156307702116
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-525396 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46811
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001:/mount-9p --alsologtostderr -v=1] stderr:
I1208 00:52:36.381663  847176 out.go:360] Setting OutFile to fd 1 ...
I1208 00:52:36.381810  847176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:36.381821  847176 out.go:374] Setting ErrFile to fd 2...
I1208 00:52:36.381827  847176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:36.382188  847176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:52:36.382509  847176 mustload.go:66] Loading cluster: functional-525396
I1208 00:52:36.383221  847176 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:36.383787  847176 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:52:36.408746  847176 host.go:66] Checking if "functional-525396" exists ...
I1208 00:52:36.409071  847176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:52:36.515519  847176 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 00:52:36.496999516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:52:36.515683  847176 cli_runner.go:164] Run: docker network inspect functional-525396 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 00:52:36.550834  847176 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001 into VM as /mount-9p ...
I1208 00:52:36.554015  847176 out.go:179]   - Mount type:   9p
I1208 00:52:36.556948  847176 out.go:179]   - User ID:      docker
I1208 00:52:36.560743  847176 out.go:179]   - Group ID:     docker
I1208 00:52:36.564440  847176 out.go:179]   - Version:      9p2000.L
I1208 00:52:36.568937  847176 out.go:179]   - Message Size: 262144
I1208 00:52:36.571835  847176 out.go:179]   - Options:      map[]
I1208 00:52:36.574667  847176 out.go:179]   - Bind Address: 192.168.49.1:46811
I1208 00:52:36.577567  847176 out.go:179] * Userspace file server: 
I1208 00:52:36.577911  847176 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1208 00:52:36.578005  847176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:52:36.607086  847176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:52:36.713675  847176 mount.go:180] unmount for /mount-9p ran successfully
I1208 00:52:36.713703  847176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1208 00:52:36.722066  847176 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46811,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1208 00:52:36.732591  847176 main.go:127] stdlog: ufs.go:141 connected
I1208 00:52:36.732757  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tversion tag 65535 msize 262144 version '9P2000.L'
I1208 00:52:36.732801  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rversion tag 65535 msize 262144 version '9P2000'
I1208 00:52:36.733067  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1208 00:52:36.733140  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rattach tag 0 aqid (ed710a fb72314e 'd')
I1208 00:52:36.733795  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 0
I1208 00:52:36.733856  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed710a fb72314e 'd') m d775 at 0 mt 1765155156 l 4096 t 0 d 0 ext )
I1208 00:52:36.736212  847176 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/.mount-process: {Name:mkba421b9de4b0651763d1e36aff587556a70e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:52:36.736429  847176 mount.go:105] mount successful: ""
I1208 00:52:36.739844  847176 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1260537005/001 to /mount-9p
I1208 00:52:36.742709  847176 out.go:203] 
I1208 00:52:36.745597  847176 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1208 00:52:37.567433  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 0
I1208 00:52:37.567532  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed710a fb72314e 'd') m d775 at 0 mt 1765155156 l 4096 t 0 d 0 ext )
I1208 00:52:37.567898  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 1 
I1208 00:52:37.567940  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 
I1208 00:52:37.568075  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Topen tag 0 fid 1 mode 0
I1208 00:52:37.568124  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Ropen tag 0 qid (ed710a fb72314e 'd') iounit 0
I1208 00:52:37.568258  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 0
I1208 00:52:37.568303  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed710a fb72314e 'd') m d775 at 0 mt 1765155156 l 4096 t 0 d 0 ext )
I1208 00:52:37.568469  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 0 count 262120
I1208 00:52:37.568583  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 258
I1208 00:52:37.568712  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 261862
I1208 00:52:37.568740  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:37.568878  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:52:37.568910  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:37.569042  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1208 00:52:37.569075  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710b fb72314e '') 
I1208 00:52:37.569218  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.569253  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed710b fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.569378  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.569413  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed710b fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.569551  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:37.569576  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:37.569715  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'test-1765155156307702116' 
I1208 00:52:37.569750  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710d fb72314e '') 
I1208 00:52:37.569876  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.569908  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.570023  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.570075  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.570222  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:37.570264  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:37.570386  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1208 00:52:37.570434  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710c fb72314e '') 
I1208 00:52:37.570609  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.570670  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed710c fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.570822  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:37.570875  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed710c fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.571001  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:37.571027  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:37.571162  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:52:37.571192  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:37.571320  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 1
I1208 00:52:37.571349  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:37.839184  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 1 0:'test-1765155156307702116' 
I1208 00:52:37.839264  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710d fb72314e '') 
I1208 00:52:37.839432  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 1
I1208 00:52:37.839477  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.839662  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 1 newfid 2 
I1208 00:52:37.839694  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 
I1208 00:52:37.839815  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Topen tag 0 fid 2 mode 0
I1208 00:52:37.839868  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Ropen tag 0 qid (ed710d fb72314e '') iounit 0
I1208 00:52:37.840013  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 1
I1208 00:52:37.840077  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:37.840212  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 2 offset 0 count 262120
I1208 00:52:37.840266  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 24
I1208 00:52:37.840402  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 2 offset 24 count 262120
I1208 00:52:37.840435  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:37.840579  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 2 offset 24 count 262120
I1208 00:52:37.840614  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:37.840765  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:37.840816  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:37.841029  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 1
I1208 00:52:37.841068  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.206649  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 0
I1208 00:52:38.206729  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed710a fb72314e 'd') m d775 at 0 mt 1765155156 l 4096 t 0 d 0 ext )
I1208 00:52:38.207241  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 1 
I1208 00:52:38.207307  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 
I1208 00:52:38.207465  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Topen tag 0 fid 1 mode 0
I1208 00:52:38.207525  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Ropen tag 0 qid (ed710a fb72314e 'd') iounit 0
I1208 00:52:38.207666  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 0
I1208 00:52:38.207710  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed710a fb72314e 'd') m d775 at 0 mt 1765155156 l 4096 t 0 d 0 ext )
I1208 00:52:38.207903  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 0 count 262120
I1208 00:52:38.208008  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 258
I1208 00:52:38.208179  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 261862
I1208 00:52:38.208205  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:38.208336  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:52:38.208360  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:38.208515  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1208 00:52:38.208570  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710b fb72314e '') 
I1208 00:52:38.208691  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.208725  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed710b fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.208863  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.208892  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed710b fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.209118  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:38.209139  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.209314  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'test-1765155156307702116' 
I1208 00:52:38.209376  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710d fb72314e '') 
I1208 00:52:38.209506  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.209542  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.209725  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.209758  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('test-1765155156307702116' 'jenkins' 'jenkins' '' q (ed710d fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.209882  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:38.209913  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.210074  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1208 00:52:38.210113  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rwalk tag 0 (ed710c fb72314e '') 
I1208 00:52:38.210225  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.210256  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed710c fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.210398  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tstat tag 0 fid 2
I1208 00:52:38.210443  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed710c fb72314e '') m 644 at 0 mt 1765155156 l 24 t 0 d 0 ext )
I1208 00:52:38.210565  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 2
I1208 00:52:38.210586  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.210697  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:52:38.210726  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rread tag 0 count 0
I1208 00:52:38.210885  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 1
I1208 00:52:38.210919  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.212292  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1208 00:52:38.212373  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rerror tag 0 ename 'file not found' ecode 0
I1208 00:52:38.488861  847176 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54546 Tclunk tag 0 fid 0
I1208 00:52:38.488934  847176 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54546 Rclunk tag 0
I1208 00:52:38.490016  847176 main.go:127] stdlog: ufs.go:147 disconnected
I1208 00:52:38.512473  847176 out.go:179] * Unmounting /mount-9p ...
I1208 00:52:38.515451  847176 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1208 00:52:38.522574  847176 mount.go:180] unmount for /mount-9p ran successfully
I1208 00:52:38.522689  847176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/.mount-process: {Name:mkba421b9de4b0651763d1e36aff587556a70e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:52:38.525750  847176 out.go:203] 
W1208 00:52:38.528650  847176 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1208 00:52:38.531439  847176 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.30s)
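
For orientation, the 9p exchange above (Twalk/Tstat/Tread against the exported directory) corresponds to the guest listing the mount, statting the three seeded files, reading the 24-byte test file, and failing to find 'pod-dates' before the mount is torn down by the terminate signal (MK_INTERRUPTED). Below is a rough sketch of the equivalent local filesystem calls in Go, assuming the guest-side path /mount-9p; this is illustrative only and is not the test's own code.

// Sketch only: the 9p Twalk/Tstat/Tread sequence above, expressed as ordinary
// filesystem calls against the mount path used by the test (an assumption here).
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	mount := "/mount-9p" // guest-side path from the log above

	// Rread of the directory (count 258 in the trace) ~ listing its entries.
	entries, err := os.ReadDir(mount)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Tstat on each walked fid ~ stat of each entry.
		info, err := e.Info()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %d bytes mode %v\n", info.Name(), info.Size(), info.Mode())
	}

	// Topen/Tread on the test file (Rread count 24) ~ reading its 24-byte payload.
	data, err := os.ReadFile(filepath.Join(mount, "test-1765155156307702116"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes\n", len(data))

	// The final Twalk to 'pod-dates' returned Rerror 'file not found'; the
	// equivalent stat fails the same way until the pod creates that file.
	if _, err := os.Stat(filepath.Join(mount, "pod-dates")); err != nil {
		fmt.Println("pod-dates not present yet:", err)
	}
}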

TestJSONOutput/pause/Command (2.42s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-485214 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-485214 --output=json --user=testUser: exit status 80 (2.424308307s)

-- stdout --
	{"specversion":"1.0","id":"cf3ebbaf-8a56-4efe-af98-a40bd45bdd8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-485214 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e1731559-ed3c-4a11-9195-e445d02599e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-08T01:07:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"27d3ff15-09eb-4e7f-a033-da1459050f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-485214 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.42s)
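
Each line of the --output=json stream above is a self-contained CloudEvents-style object (specversion, id, source, type, data); the failure itself is carried by an io.k8s.sigs.minikube.error event whose data holds exitcode 80 and the runc error message. A minimal sketch of decoding such a stream follows, with a struct shape inferred from the events shown; the type and field names are an assumption for illustration and are not the test harness's own code.

// Sketch only: decode minikube's --output=json event stream, one JSON object per line.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the events above (assumed shape).
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "not a JSON event:", err)
			continue
		}
		// io.k8s.sigs.minikube.error events carry the exitcode/message seen in the failure above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the -- stdout -- block above through this program would print the step event followed by the two error events, one per line.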

TestJSONOutput/unpause/Command (1.88s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-485214 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-485214 --output=json --user=testUser: exit status 80 (1.87594963s)

-- stdout --
	{"specversion":"1.0","id":"c2b5a87d-2ab8-4feb-8236-388de7eb7d19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-485214 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"210b1d01-580f-4ed5-9f93-391107e33264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-08T01:07:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"00ad36e8-6223-4a22-b84e-b051e4281e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-485214 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.88s)
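
Both the pause and unpause failures bottom out in the same guest-side check, sudo runc list -f json, which fails with 'open /run/runc: no such file or directory'. A sketch of re-running that check from the host over minikube ssh follows, assuming the json-output-485214 profile is still up and the locally built binary path from the log; this is not minikube's own code.

// Sketch only: reproduce the runc check that GUEST_PAUSE/GUEST_UNPAUSE perform
// inside the node, driven from the host via `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "json-output-485214",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// On this run the command reports: open /run/runc: no such file or directory
		fmt.Println("runc list failed:", err)
	}
}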

TestKubernetesUpgrade (791.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.167791449s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-386622
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-386622: (1.463875405s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-386622 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-386622 status --format={{.Host}}: exit status 7 (100.338676ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m24.503103078s)

-- stdout --
	* [kubernetes-upgrade-386622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-386622" primary control-plane node in "kubernetes-upgrade-386622" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1208 01:23:56.587505  965470 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:23:56.587714  965470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:23:56.587725  965470 out.go:374] Setting ErrFile to fd 2...
	I1208 01:23:56.587730  965470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:23:56.588003  965470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:23:56.588417  965470 out.go:368] Setting JSON to false
	I1208 01:23:56.589408  965470 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21969,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:23:56.589470  965470 start.go:143] virtualization:  
	I1208 01:23:56.593913  965470 out.go:179] * [kubernetes-upgrade-386622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:23:56.597213  965470 notify.go:221] Checking for updates...
	I1208 01:23:56.598773  965470 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:23:56.604189  965470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:23:56.608635  965470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:23:56.611587  965470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:23:56.614486  965470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:23:56.617355  965470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:23:56.620695  965470 config.go:182] Loaded profile config "kubernetes-upgrade-386622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1208 01:23:56.621373  965470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:23:56.659035  965470 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:23:56.659207  965470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:23:56.755684  965470 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 01:23:56.746353755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:23:56.755795  965470 docker.go:319] overlay module found
	I1208 01:23:56.758830  965470 out.go:179] * Using the docker driver based on existing profile
	I1208 01:23:56.761801  965470 start.go:309] selected driver: docker
	I1208 01:23:56.761819  965470 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-386622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-386622 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:23:56.761992  965470 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:23:56.762951  965470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:23:56.851515  965470 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 01:23:56.841594767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:23:56.851865  965470 cni.go:84] Creating CNI manager for ""
	I1208 01:23:56.851940  965470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:23:56.851991  965470 start.go:353] cluster config:
	{Name:kubernetes-upgrade-386622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-386622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:23:56.855116  965470 out.go:179] * Starting "kubernetes-upgrade-386622" primary control-plane node in "kubernetes-upgrade-386622" cluster
	I1208 01:23:56.857999  965470 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:23:56.861875  965470 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:23:56.864873  965470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:23:56.864921  965470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:23:56.864931  965470 cache.go:65] Caching tarball of preloaded images
	I1208 01:23:56.865013  965470 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:23:56.865023  965470 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:23:56.865135  965470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/config.json ...
	I1208 01:23:56.865333  965470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:23:56.886669  965470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:23:56.886695  965470 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:23:56.886713  965470 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:23:56.886747  965470 start.go:360] acquireMachinesLock for kubernetes-upgrade-386622: {Name:mk2795d7d7da3dff856ccd1fa70948b70d74fd30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:23:56.886810  965470 start.go:364] duration metric: took 34.437µs to acquireMachinesLock for "kubernetes-upgrade-386622"
	I1208 01:23:56.886833  965470 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:23:56.886865  965470 fix.go:54] fixHost starting: 
	I1208 01:23:56.887163  965470 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-386622 --format={{.State.Status}}
	I1208 01:23:56.908663  965470 fix.go:112] recreateIfNeeded on kubernetes-upgrade-386622: state=Stopped err=<nil>
	W1208 01:23:56.908703  965470 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:23:56.911787  965470 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-386622" ...
	I1208 01:23:56.911872  965470 cli_runner.go:164] Run: docker start kubernetes-upgrade-386622
	I1208 01:23:57.237529  965470 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-386622 --format={{.State.Status}}
	I1208 01:23:57.269678  965470 kic.go:430] container "kubernetes-upgrade-386622" state is running.
	I1208 01:23:57.270063  965470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-386622
	I1208 01:23:57.302223  965470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/config.json ...
	I1208 01:23:57.302453  965470 machine.go:94] provisionDockerMachine start ...
	I1208 01:23:57.302522  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:23:57.328092  965470 main.go:143] libmachine: Using SSH client type: native
	I1208 01:23:57.328447  965470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33732 <nil> <nil>}
	I1208 01:23:57.328457  965470 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:23:57.329053  965470 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47814->127.0.0.1:33732: read: connection reset by peer
	I1208 01:24:00.555934  965470 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-386622
	
	I1208 01:24:00.555958  965470 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-386622"
	I1208 01:24:00.556053  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:00.582100  965470 main.go:143] libmachine: Using SSH client type: native
	I1208 01:24:00.582440  965470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33732 <nil> <nil>}
	I1208 01:24:00.582460  965470 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-386622 && echo "kubernetes-upgrade-386622" | sudo tee /etc/hostname
	I1208 01:24:00.764075  965470 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-386622
	
	I1208 01:24:00.764237  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:00.795080  965470 main.go:143] libmachine: Using SSH client type: native
	I1208 01:24:00.795423  965470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33732 <nil> <nil>}
	I1208 01:24:00.795447  965470 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-386622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-386622/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-386622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:24:00.959997  965470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:24:00.960079  965470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:24:00.960127  965470 ubuntu.go:190] setting up certificates
	I1208 01:24:00.960177  965470 provision.go:84] configureAuth start
	I1208 01:24:00.960271  965470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-386622
	I1208 01:24:00.985381  965470 provision.go:143] copyHostCerts
	I1208 01:24:00.985505  965470 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:24:00.985520  965470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:24:00.985602  965470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:24:00.985722  965470 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:24:00.985731  965470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:24:00.985760  965470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:24:00.985837  965470 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:24:00.985843  965470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:24:00.985871  965470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:24:00.985930  965470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-386622 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-386622 localhost minikube]
	I1208 01:24:01.198589  965470 provision.go:177] copyRemoteCerts
	I1208 01:24:01.198721  965470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:24:01.198804  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:01.218124  965470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33732 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kubernetes-upgrade-386622/id_rsa Username:docker}
	I1208 01:24:01.328342  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:24:01.351560  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 01:24:01.374745  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:24:01.397388  965470 provision.go:87] duration metric: took 437.168534ms to configureAuth
	I1208 01:24:01.397427  965470 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:24:01.397650  965470 config.go:182] Loaded profile config "kubernetes-upgrade-386622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:24:01.397776  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:01.417709  965470 main.go:143] libmachine: Using SSH client type: native
	I1208 01:24:01.418126  965470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33732 <nil> <nil>}
	I1208 01:24:01.418195  965470 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:24:01.796475  965470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:24:01.796545  965470 machine.go:97] duration metric: took 4.49407417s to provisionDockerMachine
	I1208 01:24:01.796570  965470 start.go:293] postStartSetup for "kubernetes-upgrade-386622" (driver="docker")
	I1208 01:24:01.796595  965470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:24:01.796712  965470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:24:01.796796  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:01.820570  965470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33732 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kubernetes-upgrade-386622/id_rsa Username:docker}
	I1208 01:24:01.929429  965470 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:24:01.934505  965470 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:24:01.934543  965470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:24:01.934561  965470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:24:01.934621  965470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:24:01.934716  965470 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:24:01.934886  965470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:24:01.945758  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:24:01.969839  965470 start.go:296] duration metric: took 173.240657ms for postStartSetup
	I1208 01:24:01.969967  965470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:24:01.970027  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:01.991906  965470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33732 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kubernetes-upgrade-386622/id_rsa Username:docker}
	I1208 01:24:02.097382  965470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:24:02.102975  965470 fix.go:56] duration metric: took 5.21610215s for fixHost
	I1208 01:24:02.102999  965470 start.go:83] releasing machines lock for "kubernetes-upgrade-386622", held for 5.216176587s
	I1208 01:24:02.103081  965470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-386622
	I1208 01:24:02.121045  965470 ssh_runner.go:195] Run: cat /version.json
	I1208 01:24:02.121110  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:02.121124  965470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:24:02.121192  965470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-386622
	I1208 01:24:02.152399  965470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33732 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kubernetes-upgrade-386622/id_rsa Username:docker}
	I1208 01:24:02.159807  965470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33732 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kubernetes-upgrade-386622/id_rsa Username:docker}
	I1208 01:24:02.388093  965470 ssh_runner.go:195] Run: systemctl --version
	I1208 01:24:02.395758  965470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:24:02.442047  965470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:24:02.447378  965470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:24:02.447501  965470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:24:02.460953  965470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:24:02.461028  965470 start.go:496] detecting cgroup driver to use...
	I1208 01:24:02.461080  965470 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:24:02.461155  965470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:24:02.478033  965470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:24:02.493046  965470 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:24:02.493164  965470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:24:02.510476  965470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:24:02.525253  965470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:24:02.666677  965470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:24:02.811467  965470 docker.go:234] disabling docker service ...
	I1208 01:24:02.811597  965470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:24:02.828513  965470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:24:02.843539  965470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:24:02.990238  965470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:24:03.165705  965470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:24:03.180822  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:24:03.196037  965470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:24:03.196102  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.205990  965470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:24:03.206052  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.216075  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.225722  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.236452  965470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:24:03.245561  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.257077  965470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.266437  965470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:24:03.276297  965470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:24:03.285460  965470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:24:03.293938  965470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:24:03.436463  965470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:24:03.655823  965470 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:24:03.655959  965470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:24:03.660176  965470 start.go:564] Will wait 60s for crictl version
	I1208 01:24:03.660303  965470 ssh_runner.go:195] Run: which crictl
	I1208 01:24:03.664493  965470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:24:03.694196  965470 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:24:03.694352  965470 ssh_runner.go:195] Run: crio --version
	I1208 01:24:03.731257  965470 ssh_runner.go:195] Run: crio --version
	I1208 01:24:03.768689  965470 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:24:03.771730  965470 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-386622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:24:03.805704  965470 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:24:03.815050  965470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:24:03.829271  965470 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-386622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-386622 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:24:03.829421  965470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:24:03.829483  965470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:24:03.885201  965470 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:24:03.885354  965470 ssh_runner.go:195] Run: which lz4
	I1208 01:24:03.889422  965470 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 01:24:03.893232  965470 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 01:24:03.893263  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1208 01:24:06.936038  965470 crio.go:462] duration metric: took 3.046660724s to copy over tarball
	I1208 01:24:06.936131  965470 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 01:24:09.324527  965470 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.388367822s)
	I1208 01:24:09.324565  965470 crio.go:469] duration metric: took 2.388472094s to extract the tarball
	I1208 01:24:09.324573  965470 ssh_runner.go:146] rm: /preloaded.tar.lz4
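For reference, the preload step above can be reproduced on the node with the same commands the log records; a minimal bash sketch, assuming the cached tarball has already been copied to /preloaded.tar.lz4 as the scp line shows:

    # Check whether a tarball is already present, unpack the preloaded images
    # into /var for CRI-O, then remove the tarball and confirm the images exist.
    stat -c "%s %y" /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json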
	I1208 01:24:09.384137  965470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:24:09.451191  965470 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:24:09.451212  965470 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:24:09.451219  965470 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:24:09.451325  965470 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-386622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-386622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:24:09.451405  965470 ssh_runner.go:195] Run: crio config
	I1208 01:24:09.539500  965470 cni.go:84] Creating CNI manager for ""
	I1208 01:24:09.539569  965470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:24:09.539607  965470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:24:09.539664  965470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-386622 NodeName:kubernetes-upgrade-386622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:24:09.539847  965470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-386622"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:24:09.539968  965470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:24:09.548894  965470 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:24:09.549015  965470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:24:09.563445  965470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1208 01:24:09.587163  965470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:24:09.609078  965470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1208 01:24:09.636586  965470 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:24:09.641238  965470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
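The one-liner above rewrites /etc/hosts through a temporary file: it drops any stale control-plane.minikube.internal entry and appends the current mapping. Spelled out as a sketch (host name and IP taken from the log):

    # Filter out the old entry, append the new one, then copy the result back.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts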
	I1208 01:24:09.658534  965470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:24:09.867415  965470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:24:09.908685  965470 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622 for IP: 192.168.76.2
	I1208 01:24:09.908702  965470 certs.go:195] generating shared ca certs ...
	I1208 01:24:09.908719  965470 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:24:09.908879  965470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:24:09.908924  965470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:24:09.908941  965470 certs.go:257] generating profile certs ...
	I1208 01:24:09.909028  965470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/client.key
	I1208 01:24:09.909093  965470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/apiserver.key.12e9cd06
	I1208 01:24:09.909136  965470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/proxy-client.key
	I1208 01:24:09.909248  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:24:09.909279  965470 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:24:09.909287  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:24:09.909315  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:24:09.909341  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:24:09.909364  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:24:09.909409  965470 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:24:09.909990  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:24:09.968523  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:24:10.016939  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:24:10.044119  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:24:10.072200  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1208 01:24:10.099335  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:24:10.127278  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:24:10.147888  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 01:24:10.184900  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:24:10.224722  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:24:10.255580  965470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:24:10.288243  965470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:24:10.313619  965470 ssh_runner.go:195] Run: openssl version
	I1208 01:24:10.323302  965470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:24:10.337400  965470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:24:10.349606  965470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:24:10.355442  965470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:24:10.355511  965470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:24:10.405705  965470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:24:10.419052  965470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:24:10.436929  965470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:24:10.450992  965470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:24:10.455361  965470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:24:10.455426  965470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:24:10.535696  965470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:24:10.544335  965470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:24:10.555492  965470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:24:10.570256  965470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:24:10.579691  965470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:24:10.579813  965470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:24:10.638013  965470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
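Each CA above is installed by symlinking it into /etc/ssl/certs and then checking for an OpenSSL subject-hash link (<hash>.0), which is how OpenSSL locates trust roots. A sketch of that check for a single certificate, using the minikubeCA path from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo test -s "$CERT"                                  # cert exists and is non-empty
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"
    HASH=$(openssl x509 -hash -noout -in "$CERT")         # e.g. b5213941
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "subject-hash link present"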
	I1208 01:24:10.645848  965470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:24:10.655201  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:24:10.709003  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:24:10.783371  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:24:10.860613  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:24:10.937473  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:24:10.980152  965470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
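The -checkend 86400 calls above ask openssl whether each control-plane certificate will still be valid 24 hours from now; the command exits non-zero if the certificate expires within that window. For one of the certificates from the log:

    # Exit status 0: valid for at least another 24h; non-zero: expiring (or expired).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for 24h" || echo "expiring soon"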
	I1208 01:24:11.039560  965470 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-386622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-386622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:24:11.039717  965470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:24:11.039828  965470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:24:11.117435  965470 cri.go:89] found id: ""
	I1208 01:24:11.117509  965470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:24:11.129499  965470 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:24:11.129574  965470 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:24:11.129675  965470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:24:11.140222  965470 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:24:11.140746  965470 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-386622" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:24:11.140936  965470 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-386622" cluster setting kubeconfig missing "kubernetes-upgrade-386622" context setting]
	I1208 01:24:11.141306  965470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:24:11.141959  965470 kapi.go:59] client config for kubernetes-upgrade-386622: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kubernetes-upgrade-386622/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:24:11.142939  965470 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 01:24:11.142991  965470 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 01:24:11.143030  965470 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 01:24:11.143053  965470 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 01:24:11.143079  965470 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 01:24:11.143503  965470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:24:11.162682  965470 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 01:23:28.991041389 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 01:24:09.631420362 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-386622"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1208 01:24:11.162751  965470 kubeadm.go:1161] stopping kube-system containers ...
	I1208 01:24:11.162776  965470 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 01:24:11.162901  965470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:24:11.217483  965470 cri.go:89] found id: ""
	I1208 01:24:11.217601  965470 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 01:24:11.235584  965470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:24:11.243556  965470 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  8 01:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  8 01:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  8 01:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  8 01:23 /etc/kubernetes/scheduler.conf
	
	I1208 01:24:11.243670  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:24:11.252048  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:24:11.264122  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:24:11.277511  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:24:11.277627  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:24:11.289019  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:24:11.299929  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:24:11.300065  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:24:11.314014  965470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:24:11.326654  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:24:11.426032  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:24:12.821869  965470 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.395754927s)
	I1208 01:24:12.821935  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:24:13.185287  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:24:13.324158  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
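Because existing configuration files were found, the restart path re-runs individual kubeadm init phases rather than a full kubeadm init. The sequence above, written out with the binary and config paths shown in the log, is roughly:

    BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"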
	I1208 01:24:13.428011  965470 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:24:13.428149  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:13.928912  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:14.428514  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:14.928245  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:15.429002  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:15.928430  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:16.428514  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:16.929050  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:17.428283  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:17.928346  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:18.429179  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:18.929101  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:19.428278  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:19.928971  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:20.428288  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:20.928322  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:21.429047  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:21.928427  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:22.428793  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:22.928598  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:23.429252  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:23.928466  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:24.428298  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:24.928321  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:25.429159  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:25.928271  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:26.428819  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:26.929023  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:27.428256  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:27.928842  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:28.428407  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:28.928252  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:29.428467  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:29.928282  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:30.428418  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:30.929011  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:31.429029  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:31.928721  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:32.428272  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:32.928338  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:33.428600  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:33.928265  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:34.428909  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:34.928614  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:35.428926  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:35.928957  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:36.429087  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:36.928332  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:37.429025  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:37.928879  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:38.428895  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:38.929120  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:39.428517  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:39.928332  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:40.429130  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:40.928302  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:41.429019  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:41.928287  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:42.428296  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:42.928303  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:43.429084  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:43.929054  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:44.428335  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:44.929226  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:45.428996  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:45.928319  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:46.429270  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:46.928341  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:47.429109  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:47.929068  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:48.428880  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:48.928875  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:49.428666  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:49.929066  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:50.428316  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:50.929139  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:51.429136  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:51.929001  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:52.428928  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:52.928659  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:53.428326  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:53.929035  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:54.428336  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:54.928288  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:55.428875  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:55.928274  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:56.429008  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:56.928804  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:57.428305  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:57.928752  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:58.428302  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:58.929188  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:59.428359  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:59.929069  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:00.429164  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:00.929044  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:01.429110  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:01.928806  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:02.429009  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:02.928316  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:03.428314  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:03.928832  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:04.428878  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:04.928338  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:05.428754  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:05.928358  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:06.429062  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:06.928760  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:07.428347  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:07.929213  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:08.428919  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:08.928999  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:09.428923  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:09.928791  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:10.428308  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:10.928358  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:11.428228  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:11.929211  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:12.428994  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:12.928284  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
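The block above is the apiserver wait loop: the same pgrep is retried roughly every half second, and no kube-apiserver process appears before the roughly 60-second window ends, so the run falls back to gathering logs below. An equivalent poll as a bash sketch (the timeout length is an assumption based on the timestamps):

    # Poll for a kube-apiserver process belonging to this minikube node; stop after ~60s.
    for _ in $(seq 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process found"
        break
      fi
      sleep 0.5
    done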
	I1208 01:25:13.428997  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:13.429091  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:13.460692  965470 cri.go:89] found id: ""
	I1208 01:25:13.460720  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.460730  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:13.460736  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:13.460796  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:13.486396  965470 cri.go:89] found id: ""
	I1208 01:25:13.486424  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.486433  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:13.486440  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:13.486497  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:13.525965  965470 cri.go:89] found id: ""
	I1208 01:25:13.525991  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.526001  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:13.526007  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:13.526073  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:13.551968  965470 cri.go:89] found id: ""
	I1208 01:25:13.551998  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.552007  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:13.552014  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:13.552075  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:13.576776  965470 cri.go:89] found id: ""
	I1208 01:25:13.576799  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.576808  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:13.576814  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:13.576873  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:13.602538  965470 cri.go:89] found id: ""
	I1208 01:25:13.602563  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.602572  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:13.602579  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:13.602638  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:13.629779  965470 cri.go:89] found id: ""
	I1208 01:25:13.629802  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.629810  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:13.629817  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:13.629875  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:13.658356  965470 cri.go:89] found id: ""
	I1208 01:25:13.658383  965470 logs.go:282] 0 containers: []
	W1208 01:25:13.658394  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:13.658403  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:13.658414  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:13.728011  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:13.728051  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:13.749570  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:13.749656  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:13.946391  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:13.946413  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:13.946427  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:13.977925  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:13.977959  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:16.520069  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:16.530247  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:16.530348  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:16.556162  965470 cri.go:89] found id: ""
	I1208 01:25:16.556238  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.556262  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:16.556276  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:16.556355  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:16.580666  965470 cri.go:89] found id: ""
	I1208 01:25:16.580742  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.580760  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:16.580767  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:16.580840  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:16.608043  965470 cri.go:89] found id: ""
	I1208 01:25:16.608120  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.608130  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:16.608137  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:16.608209  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:16.641142  965470 cri.go:89] found id: ""
	I1208 01:25:16.641170  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.641180  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:16.641186  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:16.641249  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:16.668728  965470 cri.go:89] found id: ""
	I1208 01:25:16.668756  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.668764  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:16.668771  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:16.668831  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:16.714063  965470 cri.go:89] found id: ""
	I1208 01:25:16.714088  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.714097  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:16.714114  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:16.714174  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:16.751925  965470 cri.go:89] found id: ""
	I1208 01:25:16.751951  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.751960  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:16.751966  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:16.752023  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:16.781437  965470 cri.go:89] found id: ""
	I1208 01:25:16.781462  965470 logs.go:282] 0 containers: []
	W1208 01:25:16.781471  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:16.781480  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:16.781494  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:16.823353  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:16.823427  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:16.899855  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:16.899892  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:16.923560  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:16.923593  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:17.011254  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:17.011277  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:17.011295  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:19.550379  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:19.560241  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:19.560313  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:19.587706  965470 cri.go:89] found id: ""
	I1208 01:25:19.587729  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.587737  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:19.587744  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:19.587801  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:19.613728  965470 cri.go:89] found id: ""
	I1208 01:25:19.613753  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.613762  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:19.613768  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:19.613825  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:19.641536  965470 cri.go:89] found id: ""
	I1208 01:25:19.641563  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.641572  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:19.641578  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:19.641688  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:19.667210  965470 cri.go:89] found id: ""
	I1208 01:25:19.667235  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.667244  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:19.667250  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:19.667309  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:19.693162  965470 cri.go:89] found id: ""
	I1208 01:25:19.693184  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.693192  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:19.693199  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:19.693256  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:19.718060  965470 cri.go:89] found id: ""
	I1208 01:25:19.718083  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.718092  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:19.718114  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:19.718174  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:19.745460  965470 cri.go:89] found id: ""
	I1208 01:25:19.745483  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.745492  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:19.745498  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:19.745556  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:19.773754  965470 cri.go:89] found id: ""
	I1208 01:25:19.773821  965470 logs.go:282] 0 containers: []
	W1208 01:25:19.773845  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:19.773866  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:19.773903  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:19.804562  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:19.804595  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:19.838420  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:19.838449  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:19.905660  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:19.905696  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:19.924365  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:19.924394  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:20.001378  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:22.501676  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:22.512235  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:22.512322  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:22.539010  965470 cri.go:89] found id: ""
	I1208 01:25:22.539073  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.539089  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:22.539097  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:22.539173  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:22.565705  965470 cri.go:89] found id: ""
	I1208 01:25:22.565731  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.565740  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:22.565746  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:22.565816  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:22.593485  965470 cri.go:89] found id: ""
	I1208 01:25:22.593565  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.593586  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:22.593593  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:22.593670  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:22.626125  965470 cri.go:89] found id: ""
	I1208 01:25:22.626151  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.626160  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:22.626166  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:22.626254  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:22.651561  965470 cri.go:89] found id: ""
	I1208 01:25:22.651584  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.651593  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:22.651599  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:22.651658  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:22.677018  965470 cri.go:89] found id: ""
	I1208 01:25:22.677042  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.677051  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:22.677058  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:22.677115  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:22.706493  965470 cri.go:89] found id: ""
	I1208 01:25:22.706566  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.706589  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:22.706613  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:22.706705  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:22.733264  965470 cri.go:89] found id: ""
	I1208 01:25:22.733339  965470 logs.go:282] 0 containers: []
	W1208 01:25:22.733360  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:22.733382  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:22.733419  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:22.764194  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:22.764219  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:22.831417  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:22.831452  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:22.849892  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:22.849924  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:22.920602  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:22.920622  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:22.920634  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:25.452058  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:25.462137  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:25.462209  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:25.487529  965470 cri.go:89] found id: ""
	I1208 01:25:25.487553  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.487562  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:25.487568  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:25.487638  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:25.512351  965470 cri.go:89] found id: ""
	I1208 01:25:25.512375  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.512384  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:25.512390  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:25.512446  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:25.544349  965470 cri.go:89] found id: ""
	I1208 01:25:25.544375  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.544384  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:25.544391  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:25.544450  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:25.570297  965470 cri.go:89] found id: ""
	I1208 01:25:25.570332  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.570341  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:25.570347  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:25.570406  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:25.599629  965470 cri.go:89] found id: ""
	I1208 01:25:25.599652  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.599660  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:25.599667  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:25.599742  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:25.625964  965470 cri.go:89] found id: ""
	I1208 01:25:25.625997  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.626006  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:25.626012  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:25.626099  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:25.651085  965470 cri.go:89] found id: ""
	I1208 01:25:25.651116  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.651124  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:25.651131  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:25.651196  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:25.681206  965470 cri.go:89] found id: ""
	I1208 01:25:25.681232  965470 logs.go:282] 0 containers: []
	W1208 01:25:25.681241  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:25.681251  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:25.681281  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:25.749395  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:25.749431  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:25.767488  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:25.767516  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:25.837132  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:25.837154  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:25.837167  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:25.868527  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:25.868565  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:28.402615  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:28.413352  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:28.413424  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:28.438752  965470 cri.go:89] found id: ""
	I1208 01:25:28.438777  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.438787  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:28.438793  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:28.438875  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:28.465044  965470 cri.go:89] found id: ""
	I1208 01:25:28.465068  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.465076  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:28.465082  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:28.465139  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:28.491679  965470 cri.go:89] found id: ""
	I1208 01:25:28.491701  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.491710  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:28.491716  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:28.491773  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:28.522180  965470 cri.go:89] found id: ""
	I1208 01:25:28.522201  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.522221  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:28.522228  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:28.522289  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:28.549736  965470 cri.go:89] found id: ""
	I1208 01:25:28.549759  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.549767  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:28.549774  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:28.549833  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:28.575046  965470 cri.go:89] found id: ""
	I1208 01:25:28.575072  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.575081  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:28.575088  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:28.575146  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:28.602615  965470 cri.go:89] found id: ""
	I1208 01:25:28.602684  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.602707  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:28.602725  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:28.602816  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:28.627497  965470 cri.go:89] found id: ""
	I1208 01:25:28.627520  965470 logs.go:282] 0 containers: []
	W1208 01:25:28.627528  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:28.627537  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:28.627554  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:28.697860  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:28.697906  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:28.722809  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:28.722941  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:28.789117  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:28.789142  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:28.789157  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:28.819627  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:28.819661  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:31.349528  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:31.359589  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:31.359661  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:31.384466  965470 cri.go:89] found id: ""
	I1208 01:25:31.384494  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.384503  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:31.384510  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:31.384568  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:31.413684  965470 cri.go:89] found id: ""
	I1208 01:25:31.413709  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.413718  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:31.413736  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:31.413815  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:31.440213  965470 cri.go:89] found id: ""
	I1208 01:25:31.440238  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.440247  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:31.440254  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:31.440312  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:31.464751  965470 cri.go:89] found id: ""
	I1208 01:25:31.464776  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.464785  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:31.464792  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:31.464872  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:31.491129  965470 cri.go:89] found id: ""
	I1208 01:25:31.491161  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.491170  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:31.491176  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:31.491244  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:31.517240  965470 cri.go:89] found id: ""
	I1208 01:25:31.517273  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.517282  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:31.517289  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:31.517356  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:31.542368  965470 cri.go:89] found id: ""
	I1208 01:25:31.542393  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.542403  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:31.542409  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:31.542475  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:31.567217  965470 cri.go:89] found id: ""
	I1208 01:25:31.567241  965470 logs.go:282] 0 containers: []
	W1208 01:25:31.567250  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:31.567268  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:31.567280  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:31.597454  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:31.597489  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:31.626032  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:31.626065  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:31.693347  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:31.693380  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:31.711166  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:31.711195  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:31.773747  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:34.274990  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:34.286556  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:34.286627  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:34.327859  965470 cri.go:89] found id: ""
	I1208 01:25:34.327881  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.327890  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:34.327896  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:34.327953  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:34.377946  965470 cri.go:89] found id: ""
	I1208 01:25:34.377970  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.377980  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:34.377987  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:34.378050  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:34.410568  965470 cri.go:89] found id: ""
	I1208 01:25:34.410593  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.410602  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:34.410608  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:34.410668  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:34.446283  965470 cri.go:89] found id: ""
	I1208 01:25:34.446308  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.446318  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:34.446324  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:34.446383  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:34.490592  965470 cri.go:89] found id: ""
	I1208 01:25:34.490617  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.490626  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:34.490632  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:34.490697  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:34.522424  965470 cri.go:89] found id: ""
	I1208 01:25:34.522450  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.522460  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:34.522467  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:34.522529  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:34.549807  965470 cri.go:89] found id: ""
	I1208 01:25:34.549833  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.549843  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:34.549849  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:34.549914  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:34.575841  965470 cri.go:89] found id: ""
	I1208 01:25:34.575864  965470 logs.go:282] 0 containers: []
	W1208 01:25:34.575873  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:34.575882  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:34.575894  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:34.593920  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:34.593951  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:34.660968  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:34.660989  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:34.661001  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:34.693309  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:34.693343  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:34.720688  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:34.720713  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:37.291135  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:37.303929  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:37.303994  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:37.340051  965470 cri.go:89] found id: ""
	I1208 01:25:37.340072  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.340080  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:37.340095  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:37.340225  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:37.389019  965470 cri.go:89] found id: ""
	I1208 01:25:37.389101  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.389123  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:37.389141  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:37.389239  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:37.425197  965470 cri.go:89] found id: ""
	I1208 01:25:37.425230  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.425238  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:37.425244  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:37.425309  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:37.466787  965470 cri.go:89] found id: ""
	I1208 01:25:37.466892  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.466916  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:37.466936  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:37.467026  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:37.500997  965470 cri.go:89] found id: ""
	I1208 01:25:37.501095  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.501128  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:37.501173  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:37.501273  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:37.539927  965470 cri.go:89] found id: ""
	I1208 01:25:37.539954  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.539963  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:37.539970  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:37.540074  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:37.583485  965470 cri.go:89] found id: ""
	I1208 01:25:37.583511  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.583519  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:37.583526  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:37.583587  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:37.622913  965470 cri.go:89] found id: ""
	I1208 01:25:37.622936  965470 logs.go:282] 0 containers: []
	W1208 01:25:37.622945  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:37.622963  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:37.622976  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:37.678355  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:37.678452  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:37.741779  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:37.741855  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:37.839682  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:37.839720  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:37.859558  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:37.859590  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:38.032350  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:40.533776  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:40.543604  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:40.543675  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:40.572337  965470 cri.go:89] found id: ""
	I1208 01:25:40.572362  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.572371  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:40.572377  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:40.572434  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:40.596765  965470 cri.go:89] found id: ""
	I1208 01:25:40.596791  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.596800  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:40.596807  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:40.596864  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:40.624042  965470 cri.go:89] found id: ""
	I1208 01:25:40.624069  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.624078  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:40.624084  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:40.624142  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:40.649816  965470 cri.go:89] found id: ""
	I1208 01:25:40.649836  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.649845  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:40.649852  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:40.649914  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:40.674776  965470 cri.go:89] found id: ""
	I1208 01:25:40.674801  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.674809  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:40.674815  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:40.674898  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:40.701491  965470 cri.go:89] found id: ""
	I1208 01:25:40.701510  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.701519  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:40.701525  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:40.701581  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:40.726523  965470 cri.go:89] found id: ""
	I1208 01:25:40.726543  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.726552  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:40.726558  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:40.726614  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:40.754834  965470 cri.go:89] found id: ""
	I1208 01:25:40.754875  965470 logs.go:282] 0 containers: []
	W1208 01:25:40.754884  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:40.754892  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:40.754903  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:40.785669  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:40.785704  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:40.852588  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:40.852624  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:40.871201  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:40.871232  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:40.932199  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:40.932231  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:40.932260  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:43.462907  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:43.473409  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:43.473477  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:43.499044  965470 cri.go:89] found id: ""
	I1208 01:25:43.499071  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.499080  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:43.499087  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:43.499148  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:43.525965  965470 cri.go:89] found id: ""
	I1208 01:25:43.525988  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.525996  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:43.526010  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:43.526067  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:43.554280  965470 cri.go:89] found id: ""
	I1208 01:25:43.554305  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.554313  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:43.554320  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:43.554383  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:43.580780  965470 cri.go:89] found id: ""
	I1208 01:25:43.580802  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.580811  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:43.580817  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:43.580907  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:43.621387  965470 cri.go:89] found id: ""
	I1208 01:25:43.621409  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.621418  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:43.621424  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:43.621483  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:43.648642  965470 cri.go:89] found id: ""
	I1208 01:25:43.648702  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.648726  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:43.648745  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:43.648811  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:43.674355  965470 cri.go:89] found id: ""
	I1208 01:25:43.674381  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.674390  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:43.674397  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:43.674455  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:43.702588  965470 cri.go:89] found id: ""
	I1208 01:25:43.702609  965470 logs.go:282] 0 containers: []
	W1208 01:25:43.702617  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:43.702625  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:43.702639  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:43.776405  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:43.776447  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:43.795001  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:43.795077  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:43.856866  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:43.856888  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:43.856902  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:43.886620  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:43.886658  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:46.416261  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:46.427205  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:46.427286  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:46.454574  965470 cri.go:89] found id: ""
	I1208 01:25:46.454598  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.454607  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:46.454613  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:46.454672  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:46.483867  965470 cri.go:89] found id: ""
	I1208 01:25:46.483890  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.483898  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:46.483904  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:46.483962  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:46.510258  965470 cri.go:89] found id: ""
	I1208 01:25:46.510283  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.510293  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:46.510299  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:46.510360  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:46.535364  965470 cri.go:89] found id: ""
	I1208 01:25:46.535387  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.535401  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:46.535407  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:46.535468  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:46.565909  965470 cri.go:89] found id: ""
	I1208 01:25:46.565932  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.565941  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:46.565947  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:46.566007  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:46.596486  965470 cri.go:89] found id: ""
	I1208 01:25:46.596515  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.596524  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:46.596530  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:46.596595  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:46.622202  965470 cri.go:89] found id: ""
	I1208 01:25:46.622229  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.622238  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:46.622245  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:46.622306  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:46.649700  965470 cri.go:89] found id: ""
	I1208 01:25:46.649724  965470 logs.go:282] 0 containers: []
	W1208 01:25:46.649733  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:46.649742  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:46.649753  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:46.717038  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:46.717075  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:46.734803  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:46.734861  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:46.802542  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:46.802562  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:46.802574  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:46.834511  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:46.834542  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:49.365389  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:49.379261  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:49.379327  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:49.411791  965470 cri.go:89] found id: ""
	I1208 01:25:49.411812  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.411821  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:49.411828  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:49.411889  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:49.453171  965470 cri.go:89] found id: ""
	I1208 01:25:49.453192  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.453200  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:49.453207  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:49.453268  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:49.485495  965470 cri.go:89] found id: ""
	I1208 01:25:49.485530  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.485539  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:49.485546  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:49.485614  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:49.523560  965470 cri.go:89] found id: ""
	I1208 01:25:49.523583  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.523595  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:49.523607  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:49.523672  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:49.554141  965470 cri.go:89] found id: ""
	I1208 01:25:49.554218  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.554240  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:49.554259  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:49.554357  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:49.587705  965470 cri.go:89] found id: ""
	I1208 01:25:49.587775  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.587799  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:49.587818  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:49.587908  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:49.620344  965470 cri.go:89] found id: ""
	I1208 01:25:49.620422  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.620443  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:49.620460  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:49.620551  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:49.650568  965470 cri.go:89] found id: ""
	I1208 01:25:49.650632  965470 logs.go:282] 0 containers: []
	W1208 01:25:49.650662  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:49.650685  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:49.650722  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:49.730474  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:49.730557  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:49.749960  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:49.749991  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:49.820978  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:49.820998  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:49.821011  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:49.852697  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:49.852730  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:52.382776  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:52.392933  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:52.393001  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:52.419464  965470 cri.go:89] found id: ""
	I1208 01:25:52.419487  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.419496  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:52.419502  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:52.419567  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:52.446628  965470 cri.go:89] found id: ""
	I1208 01:25:52.446696  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.446716  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:52.446741  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:52.446827  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:52.474657  965470 cri.go:89] found id: ""
	I1208 01:25:52.474732  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.474756  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:52.474774  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:52.474866  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:52.502465  965470 cri.go:89] found id: ""
	I1208 01:25:52.502546  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.502568  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:52.502590  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:52.502734  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:52.529743  965470 cri.go:89] found id: ""
	I1208 01:25:52.529818  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.529843  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:52.529861  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:52.529932  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:52.555812  965470 cri.go:89] found id: ""
	I1208 01:25:52.555838  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.555846  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:52.555853  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:52.555911  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:52.582157  965470 cri.go:89] found id: ""
	I1208 01:25:52.582183  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.582192  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:52.582198  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:52.582261  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:52.612655  965470 cri.go:89] found id: ""
	I1208 01:25:52.612682  965470 logs.go:282] 0 containers: []
	W1208 01:25:52.612691  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:52.612700  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:52.612716  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:52.632559  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:52.632603  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:52.700198  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:52.700216  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:52.700228  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:52.731807  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:52.731839  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:52.761438  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:52.761475  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:55.334616  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:55.345460  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:55.345532  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:55.373126  965470 cri.go:89] found id: ""
	I1208 01:25:55.373149  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.373166  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:55.373172  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:55.373233  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:55.397702  965470 cri.go:89] found id: ""
	I1208 01:25:55.397724  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.397732  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:55.397744  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:55.397804  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:55.425244  965470 cri.go:89] found id: ""
	I1208 01:25:55.425267  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.425276  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:55.425282  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:55.425340  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:55.453299  965470 cri.go:89] found id: ""
	I1208 01:25:55.453321  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.453329  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:55.453336  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:55.453395  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:55.482720  965470 cri.go:89] found id: ""
	I1208 01:25:55.482741  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.482750  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:55.482756  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:55.482826  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:55.508810  965470 cri.go:89] found id: ""
	I1208 01:25:55.508887  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.508903  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:55.508910  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:55.508990  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:55.535434  965470 cri.go:89] found id: ""
	I1208 01:25:55.535457  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.535465  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:55.535471  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:55.535533  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:55.561946  965470 cri.go:89] found id: ""
	I1208 01:25:55.561968  965470 logs.go:282] 0 containers: []
	W1208 01:25:55.561976  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:55.561986  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:55.561997  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:55.630421  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:55.630458  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:55.649295  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:55.649323  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:55.719301  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:55.719366  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:55.719393  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:55.750977  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:55.751013  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:58.286037  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:58.297517  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:58.297589  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:58.328518  965470 cri.go:89] found id: ""
	I1208 01:25:58.328540  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.328549  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:25:58.328555  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:25:58.328621  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:58.358334  965470 cri.go:89] found id: ""
	I1208 01:25:58.358357  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.358365  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:25:58.358371  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:25:58.358430  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:58.384347  965470 cri.go:89] found id: ""
	I1208 01:25:58.384370  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.384379  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:25:58.384385  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:58.384446  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:58.410461  965470 cri.go:89] found id: ""
	I1208 01:25:58.410482  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.410491  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:25:58.410498  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:58.410563  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:58.436206  965470 cri.go:89] found id: ""
	I1208 01:25:58.436229  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.436238  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:58.436246  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:58.436308  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:58.462345  965470 cri.go:89] found id: ""
	I1208 01:25:58.462372  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.462381  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:25:58.462387  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:58.462448  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:58.488288  965470 cri.go:89] found id: ""
	I1208 01:25:58.488314  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.488324  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:58.488330  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:58.488399  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:58.513919  965470 cri.go:89] found id: ""
	I1208 01:25:58.513944  965470 logs.go:282] 0 containers: []
	W1208 01:25:58.513953  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:58.513961  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:58.513973  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:58.532681  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:58.532761  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:58.608676  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:58.608699  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:25:58.608717  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:25:58.641312  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:25:58.641352  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:58.677079  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:58.677112  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:01.254359  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:01.265998  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:01.266079  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:01.305130  965470 cri.go:89] found id: ""
	I1208 01:26:01.305157  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.305166  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:01.305172  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:01.305232  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:01.336272  965470 cri.go:89] found id: ""
	I1208 01:26:01.336298  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.336307  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:01.336314  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:01.336380  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:01.364924  965470 cri.go:89] found id: ""
	I1208 01:26:01.364951  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.364960  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:01.364966  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:01.365030  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:01.391744  965470 cri.go:89] found id: ""
	I1208 01:26:01.391774  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.391783  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:01.391790  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:01.391853  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:01.420006  965470 cri.go:89] found id: ""
	I1208 01:26:01.420029  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.420040  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:01.420046  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:01.420107  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:01.447852  965470 cri.go:89] found id: ""
	I1208 01:26:01.447932  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.447957  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:01.447976  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:01.448057  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:01.473842  965470 cri.go:89] found id: ""
	I1208 01:26:01.473864  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.473874  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:01.473880  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:01.473939  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:01.500124  965470 cri.go:89] found id: ""
	I1208 01:26:01.500146  965470 logs.go:282] 0 containers: []
	W1208 01:26:01.500156  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:01.500165  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:01.500177  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:01.568872  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:01.568912  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:01.587640  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:01.587670  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:01.658635  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:01.658661  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:01.658674  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:01.690718  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:01.690874  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:04.223372  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:04.234150  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:04.234232  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:04.269167  965470 cri.go:89] found id: ""
	I1208 01:26:04.269188  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.269196  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:04.269202  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:04.269259  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:04.297530  965470 cri.go:89] found id: ""
	I1208 01:26:04.297558  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.297567  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:04.297578  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:04.297635  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:04.333050  965470 cri.go:89] found id: ""
	I1208 01:26:04.333075  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.333084  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:04.333091  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:04.333152  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:04.359690  965470 cri.go:89] found id: ""
	I1208 01:26:04.359713  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.359721  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:04.359729  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:04.359790  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:04.386666  965470 cri.go:89] found id: ""
	I1208 01:26:04.386687  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.386696  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:04.386702  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:04.386762  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:04.413337  965470 cri.go:89] found id: ""
	I1208 01:26:04.413360  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.413368  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:04.413375  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:04.413434  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:04.443969  965470 cri.go:89] found id: ""
	I1208 01:26:04.443997  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.444007  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:04.444014  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:04.444084  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:04.470373  965470 cri.go:89] found id: ""
	I1208 01:26:04.470399  965470 logs.go:282] 0 containers: []
	W1208 01:26:04.470409  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:04.470417  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:04.470430  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:04.541009  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:04.541028  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:04.541040  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:04.571999  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:04.572037  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:04.600621  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:04.600646  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:04.668057  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:04.668098  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:07.186257  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:07.196478  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:07.196546  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:07.226404  965470 cri.go:89] found id: ""
	I1208 01:26:07.226432  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.226440  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:07.226446  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:07.226505  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:07.256945  965470 cri.go:89] found id: ""
	I1208 01:26:07.256972  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.256982  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:07.256988  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:07.257046  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:07.292741  965470 cri.go:89] found id: ""
	I1208 01:26:07.292766  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.292781  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:07.292787  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:07.292849  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:07.322987  965470 cri.go:89] found id: ""
	I1208 01:26:07.323014  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.323024  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:07.323030  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:07.323095  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:07.351058  965470 cri.go:89] found id: ""
	I1208 01:26:07.351085  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.351094  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:07.351100  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:07.351168  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:07.378261  965470 cri.go:89] found id: ""
	I1208 01:26:07.378288  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.378298  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:07.378304  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:07.378363  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:07.403202  965470 cri.go:89] found id: ""
	I1208 01:26:07.403225  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.403233  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:07.403239  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:07.403296  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:07.429937  965470 cri.go:89] found id: ""
	I1208 01:26:07.429969  965470 logs.go:282] 0 containers: []
	W1208 01:26:07.429978  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:07.429988  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:07.430000  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:07.448110  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:07.448198  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:07.510913  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:07.510976  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:07.511001  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:07.542188  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:07.542225  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:07.569622  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:07.569651  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:10.137736  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:10.149360  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:10.149452  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:10.179770  965470 cri.go:89] found id: ""
	I1208 01:26:10.179797  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.179807  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:10.179814  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:10.179876  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:10.208053  965470 cri.go:89] found id: ""
	I1208 01:26:10.208078  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.208087  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:10.208093  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:10.208151  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:10.272210  965470 cri.go:89] found id: ""
	I1208 01:26:10.272234  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.272243  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:10.272249  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:10.272312  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:10.309618  965470 cri.go:89] found id: ""
	I1208 01:26:10.309640  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.309648  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:10.309655  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:10.309724  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:10.357672  965470 cri.go:89] found id: ""
	I1208 01:26:10.357694  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.357703  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:10.357709  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:10.357779  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:10.412437  965470 cri.go:89] found id: ""
	I1208 01:26:10.412458  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.412467  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:10.412474  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:10.412537  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:10.446022  965470 cri.go:89] found id: ""
	I1208 01:26:10.446043  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.446063  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:10.446069  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:10.446127  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:10.480875  965470 cri.go:89] found id: ""
	I1208 01:26:10.480898  965470 logs.go:282] 0 containers: []
	W1208 01:26:10.480907  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:10.480916  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:10.480930  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:10.561303  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:10.561383  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:10.581948  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:10.582023  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:10.682552  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:10.682575  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:10.682588  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:10.716122  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:10.716158  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:13.246970  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:13.258142  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:13.258214  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:13.291694  965470 cri.go:89] found id: ""
	I1208 01:26:13.291714  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.291722  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:13.291730  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:13.291783  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:13.320793  965470 cri.go:89] found id: ""
	I1208 01:26:13.320815  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.320824  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:13.320830  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:13.320891  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:13.358361  965470 cri.go:89] found id: ""
	I1208 01:26:13.358385  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.358393  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:13.358399  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:13.358457  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:13.391805  965470 cri.go:89] found id: ""
	I1208 01:26:13.391872  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.391892  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:13.391912  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:13.391999  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:13.437446  965470 cri.go:89] found id: ""
	I1208 01:26:13.437475  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.437484  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:13.437490  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:13.437557  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:13.472948  965470 cri.go:89] found id: ""
	I1208 01:26:13.472972  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.472980  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:13.472986  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:13.473036  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:13.502246  965470 cri.go:89] found id: ""
	I1208 01:26:13.502274  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.502283  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:13.502296  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:13.502358  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:13.586198  965470 cri.go:89] found id: ""
	I1208 01:26:13.586221  965470 logs.go:282] 0 containers: []
	W1208 01:26:13.586230  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:13.586239  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:13.586251  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:13.707228  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:13.707332  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:13.736869  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:13.736951  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:13.831415  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:13.831484  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:13.831511  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:13.870520  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:13.870604  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:16.407890  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:16.418186  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:16.418264  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:16.444317  965470 cri.go:89] found id: ""
	I1208 01:26:16.444341  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.444349  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:16.444355  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:16.444413  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:16.469009  965470 cri.go:89] found id: ""
	I1208 01:26:16.469033  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.469042  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:16.469048  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:16.469107  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:16.501459  965470 cri.go:89] found id: ""
	I1208 01:26:16.501485  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.501494  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:16.501509  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:16.501582  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:16.536459  965470 cri.go:89] found id: ""
	I1208 01:26:16.536482  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.536497  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:16.536504  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:16.536563  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:16.574814  965470 cri.go:89] found id: ""
	I1208 01:26:16.574855  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.574865  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:16.574871  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:16.574932  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:16.601771  965470 cri.go:89] found id: ""
	I1208 01:26:16.601800  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.601810  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:16.601817  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:16.601877  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:16.628222  965470 cri.go:89] found id: ""
	I1208 01:26:16.628247  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.628256  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:16.628263  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:16.628343  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:16.655889  965470 cri.go:89] found id: ""
	I1208 01:26:16.655916  965470 logs.go:282] 0 containers: []
	W1208 01:26:16.655925  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:16.655933  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:16.655945  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:16.723523  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:16.723561  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:16.742991  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:16.743024  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:16.846713  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:16.846782  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:16.846809  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:16.883812  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:16.883901  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:19.418957  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:19.429339  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:19.429411  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:19.455899  965470 cri.go:89] found id: ""
	I1208 01:26:19.455923  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.455932  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:19.455939  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:19.456002  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:19.480936  965470 cri.go:89] found id: ""
	I1208 01:26:19.480961  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.480970  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:19.480983  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:19.481046  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:19.517155  965470 cri.go:89] found id: ""
	I1208 01:26:19.517177  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.517186  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:19.517193  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:19.517250  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:19.547773  965470 cri.go:89] found id: ""
	I1208 01:26:19.547797  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.547806  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:19.547813  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:19.547876  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:19.582959  965470 cri.go:89] found id: ""
	I1208 01:26:19.582982  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.582991  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:19.582998  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:19.583058  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:19.608523  965470 cri.go:89] found id: ""
	I1208 01:26:19.608545  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.608553  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:19.608560  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:19.608616  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:19.633690  965470 cri.go:89] found id: ""
	I1208 01:26:19.633712  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.633720  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:19.633725  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:19.633782  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:19.660511  965470 cri.go:89] found id: ""
	I1208 01:26:19.660535  965470 logs.go:282] 0 containers: []
	W1208 01:26:19.660544  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:19.660553  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:19.660565  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:19.678517  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:19.678546  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:19.743931  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:19.743993  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:19.744021  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:19.775184  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:19.775218  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:19.803114  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:19.803140  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:22.372630  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:22.382776  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:22.382868  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:22.417428  965470 cri.go:89] found id: ""
	I1208 01:26:22.417457  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.417467  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:22.417473  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:22.417536  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:22.445572  965470 cri.go:89] found id: ""
	I1208 01:26:22.445598  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.445607  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:22.445613  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:22.445673  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:22.472056  965470 cri.go:89] found id: ""
	I1208 01:26:22.472081  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.472091  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:22.472097  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:22.472155  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:22.501440  965470 cri.go:89] found id: ""
	I1208 01:26:22.501462  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.501471  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:22.501477  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:22.501540  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:22.537154  965470 cri.go:89] found id: ""
	I1208 01:26:22.537176  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.537186  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:22.537192  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:22.537252  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:22.577501  965470 cri.go:89] found id: ""
	I1208 01:26:22.577584  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.577609  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:22.577628  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:22.577725  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:22.604198  965470 cri.go:89] found id: ""
	I1208 01:26:22.604225  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.604234  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:22.604240  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:22.604299  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:22.631460  965470 cri.go:89] found id: ""
	I1208 01:26:22.631540  965470 logs.go:282] 0 containers: []
	W1208 01:26:22.631555  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:22.631564  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:22.631577  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:22.697867  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:22.697903  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:22.716235  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:22.716265  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:22.782948  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:22.782969  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:22.782990  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:22.813272  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:22.813303  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:25.340763  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:25.350676  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:25.350745  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:25.376301  965470 cri.go:89] found id: ""
	I1208 01:26:25.376326  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.376336  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:25.376342  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:25.376404  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:25.405379  965470 cri.go:89] found id: ""
	I1208 01:26:25.405404  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.405413  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:25.405419  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:25.405480  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:25.431194  965470 cri.go:89] found id: ""
	I1208 01:26:25.431219  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.431227  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:25.431234  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:25.431296  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:25.456567  965470 cri.go:89] found id: ""
	I1208 01:26:25.456589  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.456598  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:25.456604  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:25.456662  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:25.481893  965470 cri.go:89] found id: ""
	I1208 01:26:25.481916  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.481931  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:25.481938  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:25.481995  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:25.523687  965470 cri.go:89] found id: ""
	I1208 01:26:25.523709  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.523717  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:25.523724  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:25.523784  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:25.560289  965470 cri.go:89] found id: ""
	I1208 01:26:25.560314  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.560325  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:25.560331  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:25.560392  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:25.592299  965470 cri.go:89] found id: ""
	I1208 01:26:25.592325  965470 logs.go:282] 0 containers: []
	W1208 01:26:25.592335  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:25.592344  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:25.592355  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:25.658300  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:25.658336  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:25.676741  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:25.676771  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:25.740928  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:25.740961  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:25.740989  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:25.771298  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:25.771331  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:28.303797  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:28.313743  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:28.313811  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:28.339129  965470 cri.go:89] found id: ""
	I1208 01:26:28.339154  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.339163  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:28.339171  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:28.339229  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:28.364730  965470 cri.go:89] found id: ""
	I1208 01:26:28.364755  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.364765  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:28.364771  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:28.364827  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:28.394186  965470 cri.go:89] found id: ""
	I1208 01:26:28.394210  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.394219  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:28.394225  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:28.394283  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:28.419539  965470 cri.go:89] found id: ""
	I1208 01:26:28.419564  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.419574  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:28.419580  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:28.419641  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:28.450481  965470 cri.go:89] found id: ""
	I1208 01:26:28.450506  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.450514  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:28.450520  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:28.450581  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:28.475248  965470 cri.go:89] found id: ""
	I1208 01:26:28.475273  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.475282  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:28.475288  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:28.475348  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:28.501220  965470 cri.go:89] found id: ""
	I1208 01:26:28.501245  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.501257  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:28.501263  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:28.501323  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:28.530902  965470 cri.go:89] found id: ""
	I1208 01:26:28.530929  965470 logs.go:282] 0 containers: []
	W1208 01:26:28.530939  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:28.530947  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:28.530961  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:28.560363  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:28.560393  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:28.627824  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:28.627858  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:28.627872  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:28.662078  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:28.662117  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:28.691451  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:28.691479  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:31.259150  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:31.270984  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:31.271052  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:31.302393  965470 cri.go:89] found id: ""
	I1208 01:26:31.302415  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.302424  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:31.302431  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:31.302492  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:31.327159  965470 cri.go:89] found id: ""
	I1208 01:26:31.327181  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.327190  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:31.327196  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:31.327255  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:31.354787  965470 cri.go:89] found id: ""
	I1208 01:26:31.354808  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.354817  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:31.354824  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:31.354913  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:31.384630  965470 cri.go:89] found id: ""
	I1208 01:26:31.384655  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.384664  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:31.384670  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:31.384729  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:31.411992  965470 cri.go:89] found id: ""
	I1208 01:26:31.412014  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.412023  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:31.412028  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:31.412091  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:31.438052  965470 cri.go:89] found id: ""
	I1208 01:26:31.438078  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.438087  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:31.438094  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:31.438155  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:31.464245  965470 cri.go:89] found id: ""
	I1208 01:26:31.464270  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.464279  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:31.464286  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:31.464346  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:31.490195  965470 cri.go:89] found id: ""
	I1208 01:26:31.490221  965470 logs.go:282] 0 containers: []
	W1208 01:26:31.490230  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:31.490239  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:31.490251  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:31.582337  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:31.582373  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:31.607641  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:31.607673  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:31.714649  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:31.714673  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:31.714688  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:31.749949  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:31.749984  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:34.291390  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:34.301995  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:34.302072  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:34.328901  965470 cri.go:89] found id: ""
	I1208 01:26:34.328935  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.328944  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:34.328954  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:34.329021  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:34.354131  965470 cri.go:89] found id: ""
	I1208 01:26:34.354159  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.354169  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:34.354175  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:34.354234  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:34.379697  965470 cri.go:89] found id: ""
	I1208 01:26:34.379719  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.379728  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:34.379734  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:34.379794  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:34.404483  965470 cri.go:89] found id: ""
	I1208 01:26:34.404507  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.404516  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:34.404522  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:34.404579  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:34.428644  965470 cri.go:89] found id: ""
	I1208 01:26:34.428668  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.428677  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:34.428683  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:34.428741  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:34.455443  965470 cri.go:89] found id: ""
	I1208 01:26:34.455468  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.455477  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:34.455483  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:34.455542  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:34.481747  965470 cri.go:89] found id: ""
	I1208 01:26:34.481772  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.481780  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:34.481787  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:34.481846  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:34.510529  965470 cri.go:89] found id: ""
	I1208 01:26:34.510555  965470 logs.go:282] 0 containers: []
	W1208 01:26:34.510564  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:34.510574  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:34.510587  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:34.594984  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:34.595025  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:34.614539  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:34.614569  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:34.681869  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:34.681888  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:34.681902  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:34.717984  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:34.718063  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:37.249304  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:37.259505  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:37.259581  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:37.284803  965470 cri.go:89] found id: ""
	I1208 01:26:37.284827  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.284836  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:37.284848  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:37.284907  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:37.311988  965470 cri.go:89] found id: ""
	I1208 01:26:37.312014  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.312024  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:37.312030  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:37.312088  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:37.337386  965470 cri.go:89] found id: ""
	I1208 01:26:37.337411  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.337420  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:37.337426  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:37.337484  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:37.363082  965470 cri.go:89] found id: ""
	I1208 01:26:37.363105  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.363114  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:37.363121  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:37.363179  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:37.387844  965470 cri.go:89] found id: ""
	I1208 01:26:37.387868  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.387878  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:37.387884  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:37.387972  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:37.412490  965470 cri.go:89] found id: ""
	I1208 01:26:37.412512  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.412520  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:37.412527  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:37.412589  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:37.441863  965470 cri.go:89] found id: ""
	I1208 01:26:37.441890  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.441909  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:37.441916  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:37.441976  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:37.467690  965470 cri.go:89] found id: ""
	I1208 01:26:37.467715  965470 logs.go:282] 0 containers: []
	W1208 01:26:37.467724  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:37.467734  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:37.467746  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:37.498707  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:37.498743  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:37.546520  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:37.546550  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:37.622984  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:37.623023  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:37.641404  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:37.641493  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:37.705152  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:40.205386  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:40.217091  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:40.217174  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:40.242763  965470 cri.go:89] found id: ""
	I1208 01:26:40.242789  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.242798  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:40.242804  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:40.242884  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:40.268732  965470 cri.go:89] found id: ""
	I1208 01:26:40.268757  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.268766  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:40.268773  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:40.268832  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:40.294335  965470 cri.go:89] found id: ""
	I1208 01:26:40.294359  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.294368  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:40.294375  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:40.294435  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:40.318919  965470 cri.go:89] found id: ""
	I1208 01:26:40.318942  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.318951  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:40.318957  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:40.319014  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:40.344112  965470 cri.go:89] found id: ""
	I1208 01:26:40.344135  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.344144  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:40.344151  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:40.344215  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:40.368926  965470 cri.go:89] found id: ""
	I1208 01:26:40.368948  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.368958  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:40.368965  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:40.369025  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:40.394573  965470 cri.go:89] found id: ""
	I1208 01:26:40.394597  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.394606  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:40.394613  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:40.394676  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:40.427548  965470 cri.go:89] found id: ""
	I1208 01:26:40.427571  965470 logs.go:282] 0 containers: []
	W1208 01:26:40.427580  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:40.427589  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:40.427601  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:40.491892  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:40.491912  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:40.491924  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:40.524718  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:40.524755  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:40.557523  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:40.557550  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:40.634780  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:40.634819  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:43.154398  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:43.165050  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:43.165121  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:43.193068  965470 cri.go:89] found id: ""
	I1208 01:26:43.193104  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.193113  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:43.193119  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:43.193181  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:43.217906  965470 cri.go:89] found id: ""
	I1208 01:26:43.217931  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.217942  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:43.217948  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:43.218006  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:43.244789  965470 cri.go:89] found id: ""
	I1208 01:26:43.244814  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.244822  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:43.244829  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:43.244888  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:43.271232  965470 cri.go:89] found id: ""
	I1208 01:26:43.271296  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.271312  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:43.271320  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:43.271378  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:43.299587  965470 cri.go:89] found id: ""
	I1208 01:26:43.299618  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.299628  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:43.299634  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:43.299698  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:43.326070  965470 cri.go:89] found id: ""
	I1208 01:26:43.326097  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.326106  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:43.326112  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:43.326183  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:43.352120  965470 cri.go:89] found id: ""
	I1208 01:26:43.352144  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.352152  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:43.352159  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:43.352215  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:43.377933  965470 cri.go:89] found id: ""
	I1208 01:26:43.377966  965470 logs.go:282] 0 containers: []
	W1208 01:26:43.377975  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:43.377984  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:43.378011  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:43.409057  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:43.409085  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:43.475872  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:43.475909  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:43.495303  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:43.495334  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:43.626583  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:43.626606  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:43.626619  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:46.165391  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:46.175699  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:46.175772  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:46.202556  965470 cri.go:89] found id: ""
	I1208 01:26:46.202583  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.202593  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:46.202599  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:46.202658  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:46.228816  965470 cri.go:89] found id: ""
	I1208 01:26:46.228842  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.228852  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:46.228859  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:46.228919  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:46.256656  965470 cri.go:89] found id: ""
	I1208 01:26:46.256725  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.256750  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:46.256770  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:46.256846  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:46.284118  965470 cri.go:89] found id: ""
	I1208 01:26:46.284144  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.284154  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:46.284160  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:46.284241  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:46.316165  965470 cri.go:89] found id: ""
	I1208 01:26:46.316187  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.316195  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:46.316201  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:46.316262  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:46.341676  965470 cri.go:89] found id: ""
	I1208 01:26:46.341700  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.341710  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:46.341716  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:46.341797  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:46.367637  965470 cri.go:89] found id: ""
	I1208 01:26:46.367708  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.367732  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:46.367750  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:46.367833  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:46.397945  965470 cri.go:89] found id: ""
	I1208 01:26:46.398037  965470 logs.go:282] 0 containers: []
	W1208 01:26:46.398060  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:46.398082  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:46.398116  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:46.468813  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:46.468856  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:46.487198  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:46.487228  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:46.588647  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:46.588682  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:46.588696  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:46.620269  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:46.620310  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:49.148975  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:49.159381  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:49.159467  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:49.184536  965470 cri.go:89] found id: ""
	I1208 01:26:49.184558  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.184566  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:49.184575  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:49.184636  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:49.209938  965470 cri.go:89] found id: ""
	I1208 01:26:49.210006  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.210038  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:49.210060  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:49.210159  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:49.237481  965470 cri.go:89] found id: ""
	I1208 01:26:49.237503  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.237512  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:49.237519  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:49.237576  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:49.262729  965470 cri.go:89] found id: ""
	I1208 01:26:49.262751  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.262759  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:49.262765  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:49.262827  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:49.290320  965470 cri.go:89] found id: ""
	I1208 01:26:49.290341  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.290351  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:49.290357  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:49.290420  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:49.316409  965470 cri.go:89] found id: ""
	I1208 01:26:49.316434  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.316444  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:49.316451  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:49.316508  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:49.352731  965470 cri.go:89] found id: ""
	I1208 01:26:49.352753  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.352761  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:49.352767  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:49.352827  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:49.385119  965470 cri.go:89] found id: ""
	I1208 01:26:49.385141  965470 logs.go:282] 0 containers: []
	W1208 01:26:49.385150  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:49.385159  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:49.385172  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:49.468377  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:49.468395  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:49.468407  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:49.500961  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:49.501580  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:49.549712  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:49.549737  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:49.663508  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:49.663544  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:52.184870  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:52.195360  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:52.195435  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:52.220490  965470 cri.go:89] found id: ""
	I1208 01:26:52.220517  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.220525  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:52.220532  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:52.220591  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:52.247922  965470 cri.go:89] found id: ""
	I1208 01:26:52.247947  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.247956  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:52.247963  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:52.248019  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:52.275332  965470 cri.go:89] found id: ""
	I1208 01:26:52.275356  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.275365  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:52.275372  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:52.275433  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:52.302357  965470 cri.go:89] found id: ""
	I1208 01:26:52.302380  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.302390  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:52.302396  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:52.302452  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:52.328158  965470 cri.go:89] found id: ""
	I1208 01:26:52.328182  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.328191  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:52.328197  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:52.328263  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:52.354650  965470 cri.go:89] found id: ""
	I1208 01:26:52.354676  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.354688  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:52.354695  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:52.354753  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:52.380124  965470 cri.go:89] found id: ""
	I1208 01:26:52.380147  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.380157  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:52.380163  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:52.380222  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:52.405335  965470 cri.go:89] found id: ""
	I1208 01:26:52.405359  965470 logs.go:282] 0 containers: []
	W1208 01:26:52.405368  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:52.405376  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:52.405389  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:52.477466  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:52.477504  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:52.498939  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:52.498969  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:52.585348  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:52.585366  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:52.585379  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:52.616767  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:52.616802  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:55.148688  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:55.159027  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:55.159100  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:55.184954  965470 cri.go:89] found id: ""
	I1208 01:26:55.184978  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.184987  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:55.184993  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:55.185054  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:55.213921  965470 cri.go:89] found id: ""
	I1208 01:26:55.213943  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.213952  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:55.213958  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:55.214021  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:55.239309  965470 cri.go:89] found id: ""
	I1208 01:26:55.239333  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.239344  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:55.239350  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:55.239408  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:55.265154  965470 cri.go:89] found id: ""
	I1208 01:26:55.265179  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.265188  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:55.265195  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:55.265260  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:55.291223  965470 cri.go:89] found id: ""
	I1208 01:26:55.291247  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.291255  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:55.291262  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:55.291320  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:55.320738  965470 cri.go:89] found id: ""
	I1208 01:26:55.320761  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.320770  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:55.320777  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:55.320835  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:55.349425  965470 cri.go:89] found id: ""
	I1208 01:26:55.349446  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.349454  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:55.349460  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:55.349517  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:55.375412  965470 cri.go:89] found id: ""
	I1208 01:26:55.375439  965470 logs.go:282] 0 containers: []
	W1208 01:26:55.375449  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:55.375457  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:55.375468  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:55.442637  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:55.442672  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:55.460571  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:55.460600  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:55.533496  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:55.533517  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:55.533537  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:55.569884  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:55.569922  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:58.106395  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:58.117266  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:58.117341  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:58.142657  965470 cri.go:89] found id: ""
	I1208 01:26:58.142683  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.142692  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:26:58.142699  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:26:58.142758  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:58.169661  965470 cri.go:89] found id: ""
	I1208 01:26:58.169687  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.169696  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:26:58.169702  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:26:58.169763  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:58.195385  965470 cri.go:89] found id: ""
	I1208 01:26:58.195412  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.195421  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:26:58.195428  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:58.195490  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:58.222212  965470 cri.go:89] found id: ""
	I1208 01:26:58.222237  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.222246  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:26:58.222254  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:58.222320  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:58.248860  965470 cri.go:89] found id: ""
	I1208 01:26:58.248883  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.248892  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:58.248899  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:58.248958  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:58.275021  965470 cri.go:89] found id: ""
	I1208 01:26:58.275045  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.275054  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:26:58.275061  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:58.275118  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:58.301843  965470 cri.go:89] found id: ""
	I1208 01:26:58.301867  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.301895  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:58.301902  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:58.301966  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:58.326688  965470 cri.go:89] found id: ""
	I1208 01:26:58.326713  965470 logs.go:282] 0 containers: []
	W1208 01:26:58.326723  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:58.326732  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:26:58.326744  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:26:58.357464  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:26:58.357498  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:58.388206  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:58.388236  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:58.462900  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:58.462949  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:58.481065  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:58.481096  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:58.575943  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:01.076177  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:01.089471  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:01.089551  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:01.131894  965470 cri.go:89] found id: ""
	I1208 01:27:01.131924  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.131934  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:01.131941  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:01.132004  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:01.173359  965470 cri.go:89] found id: ""
	I1208 01:27:01.173387  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.173396  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:01.173402  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:01.173462  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:01.212569  965470 cri.go:89] found id: ""
	I1208 01:27:01.212597  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.212606  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:01.212613  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:01.212674  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:01.242250  965470 cri.go:89] found id: ""
	I1208 01:27:01.242277  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.242286  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:01.242293  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:01.242352  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:01.275100  965470 cri.go:89] found id: ""
	I1208 01:27:01.275123  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.275131  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:01.275138  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:01.275198  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:01.322321  965470 cri.go:89] found id: ""
	I1208 01:27:01.322342  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.322351  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:01.322357  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:01.322414  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:01.353227  965470 cri.go:89] found id: ""
	I1208 01:27:01.353250  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.353259  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:01.353266  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:01.353330  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:01.379849  965470 cri.go:89] found id: ""
	I1208 01:27:01.379878  965470 logs.go:282] 0 containers: []
	W1208 01:27:01.379886  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:01.379895  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:01.379907  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:01.409436  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:01.409460  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:01.478220  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:01.478256  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:01.498501  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:01.498539  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:01.563596  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:01.563619  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:01.563636  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:04.096375  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:04.108006  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:04.108070  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:04.139860  965470 cri.go:89] found id: ""
	I1208 01:27:04.139884  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.139894  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:04.139900  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:04.139961  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:04.182132  965470 cri.go:89] found id: ""
	I1208 01:27:04.182205  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.182228  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:04.182245  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:04.182330  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:04.213642  965470 cri.go:89] found id: ""
	I1208 01:27:04.213662  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.213671  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:04.213677  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:04.213748  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:04.245174  965470 cri.go:89] found id: ""
	I1208 01:27:04.245196  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.245205  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:04.245212  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:04.245269  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:04.279685  965470 cri.go:89] found id: ""
	I1208 01:27:04.279705  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.279714  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:04.279720  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:04.279780  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:04.317495  965470 cri.go:89] found id: ""
	I1208 01:27:04.317516  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.317524  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:04.317531  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:04.317590  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:04.359672  965470 cri.go:89] found id: ""
	I1208 01:27:04.359694  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.359702  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:04.359708  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:04.359766  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:04.399247  965470 cri.go:89] found id: ""
	I1208 01:27:04.399320  965470 logs.go:282] 0 containers: []
	W1208 01:27:04.399343  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:04.399365  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:04.399409  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:04.431842  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:04.431882  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:04.470724  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:04.470755  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:04.556447  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:04.556480  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:04.575875  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:04.575946  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:04.669604  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
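The only kubectl invocation in each cycle is "describe nodes", run with the node's own binary and kubeconfig; the "connection to the server localhost:8443 was refused" stderr is the expected symptom while no apiserver is listening, which is why the same warning repeats verbatim every round. A sketch of that single step, using the exact command string from the log (running it anywhere other than the minikube node is purely illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The exact command the log shows failing each cycle.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
				"--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// With no apiserver on localhost:8443 this prints the refusal seen above.
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}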
	I1208 01:27:07.170509  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:07.180477  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:07.180546  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:07.209652  965470 cri.go:89] found id: ""
	I1208 01:27:07.209678  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.209687  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:07.209693  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:07.209750  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:07.238048  965470 cri.go:89] found id: ""
	I1208 01:27:07.238075  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.238083  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:07.238090  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:07.238148  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:07.263596  965470 cri.go:89] found id: ""
	I1208 01:27:07.263627  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.263636  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:07.263643  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:07.263704  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:07.295958  965470 cri.go:89] found id: ""
	I1208 01:27:07.295983  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.295992  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:07.295998  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:07.296066  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:07.321691  965470 cri.go:89] found id: ""
	I1208 01:27:07.321717  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.321726  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:07.321732  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:07.321792  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:07.350034  965470 cri.go:89] found id: ""
	I1208 01:27:07.350056  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.350065  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:07.350071  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:07.350128  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:07.377781  965470 cri.go:89] found id: ""
	I1208 01:27:07.377802  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.377810  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:07.377816  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:07.377877  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:07.407343  965470 cri.go:89] found id: ""
	I1208 01:27:07.407365  965470 logs.go:282] 0 containers: []
	W1208 01:27:07.407374  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:07.407382  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:07.407394  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:07.488069  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:07.488090  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:07.488103  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:07.527438  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:07.527471  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:07.573421  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:07.573450  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:07.652052  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:07.652083  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
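Once every container lookup comes back empty, each cycle falls back to collecting diagnostics: journalctl for the kubelet and CRI-O units, a filtered dmesg, and crictl ps (with docker ps as a fallback), exactly as the "Gathering logs for ..." lines record. A sketch of that gathering step wrapping the same shell commands (grouping them in a map is an assumption for illustration; minikube runs them on the node over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The log-gathering commands recorded above, executed locally for illustration.
		gather := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range gather {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s ==\nerr=%v\n%s\n", name, err, out)
		}
	}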
	I1208 01:27:10.176820  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:10.187018  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:10.187085  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:10.214038  965470 cri.go:89] found id: ""
	I1208 01:27:10.214062  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.214073  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:10.214080  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:10.214138  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:10.239000  965470 cri.go:89] found id: ""
	I1208 01:27:10.239028  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.239037  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:10.239042  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:10.239099  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:10.269729  965470 cri.go:89] found id: ""
	I1208 01:27:10.269753  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.269763  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:10.269769  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:10.269832  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:10.300503  965470 cri.go:89] found id: ""
	I1208 01:27:10.300527  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.300535  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:10.300542  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:10.300599  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:10.325663  965470 cri.go:89] found id: ""
	I1208 01:27:10.325684  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.325692  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:10.325699  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:10.325757  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:10.351101  965470 cri.go:89] found id: ""
	I1208 01:27:10.351122  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.351130  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:10.351137  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:10.351196  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:10.377231  965470 cri.go:89] found id: ""
	I1208 01:27:10.377258  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.377267  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:10.377273  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:10.377337  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:10.402522  965470 cri.go:89] found id: ""
	I1208 01:27:10.402548  965470 logs.go:282] 0 containers: []
	W1208 01:27:10.402557  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:10.402566  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:10.402577  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:10.469449  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:10.469484  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:10.487803  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:10.487839  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:10.556768  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:10.556791  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:10.556804  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:10.586963  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:10.586996  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:13.119433  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:13.129710  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:13.129784  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:13.166248  965470 cri.go:89] found id: ""
	I1208 01:27:13.166275  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.166284  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:13.166291  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:13.166348  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:13.191975  965470 cri.go:89] found id: ""
	I1208 01:27:13.191997  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.192005  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:13.192012  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:13.192071  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:13.217680  965470 cri.go:89] found id: ""
	I1208 01:27:13.217706  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.217714  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:13.217721  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:13.217781  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:13.243313  965470 cri.go:89] found id: ""
	I1208 01:27:13.243339  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.243348  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:13.243355  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:13.243415  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:13.267715  965470 cri.go:89] found id: ""
	I1208 01:27:13.267741  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.267750  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:13.267756  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:13.267814  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:13.293844  965470 cri.go:89] found id: ""
	I1208 01:27:13.293869  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.293879  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:13.293886  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:13.293947  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:13.320269  965470 cri.go:89] found id: ""
	I1208 01:27:13.320294  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.320303  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:13.320310  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:13.320370  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:13.346097  965470 cri.go:89] found id: ""
	I1208 01:27:13.346124  965470 logs.go:282] 0 containers: []
	W1208 01:27:13.346133  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:13.346142  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:13.346153  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:13.381215  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:13.381286  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:13.448524  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:13.448561  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:13.468416  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:13.468447  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:13.533882  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:13.533903  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:13.533916  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:16.074978  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:16.085266  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:16.085343  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:16.115268  965470 cri.go:89] found id: ""
	I1208 01:27:16.115290  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.115299  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:16.115305  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:16.115365  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:16.147853  965470 cri.go:89] found id: ""
	I1208 01:27:16.147879  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.147888  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:16.147894  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:16.147953  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:16.172642  965470 cri.go:89] found id: ""
	I1208 01:27:16.172667  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.172676  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:16.172682  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:16.172743  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:16.198597  965470 cri.go:89] found id: ""
	I1208 01:27:16.198620  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.198628  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:16.198635  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:16.198690  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:16.223660  965470 cri.go:89] found id: ""
	I1208 01:27:16.223687  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.223695  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:16.223703  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:16.223761  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:16.249088  965470 cri.go:89] found id: ""
	I1208 01:27:16.249111  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.249119  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:16.249128  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:16.249187  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:16.279353  965470 cri.go:89] found id: ""
	I1208 01:27:16.279375  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.279384  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:16.279390  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:16.279448  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:16.313651  965470 cri.go:89] found id: ""
	I1208 01:27:16.313676  965470 logs.go:282] 0 containers: []
	W1208 01:27:16.313685  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:16.313694  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:16.313707  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:16.345018  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:16.345054  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:16.374437  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:16.374466  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:16.446283  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:16.446317  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:16.465100  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:16.465127  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:16.541921  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
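The timestamps show the probe repeating at roughly three-second intervals (01:26:58, 01:27:01, 01:27:04, ...), with each round finding no control-plane containers. A minimal sketch of a poll-until-deadline loop of this shape; the interval is taken from the visible cadence and the overall deadline is an arbitrary placeholder, not minikube's configured timeout:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll roughly every 3s, as the timestamps in the log suggest.
		interval := 3 * time.Second
		deadline := time.Now().Add(2 * time.Minute) // placeholder deadline

		for time.Now().Before(deadline) {
			// Same probe as in the log: is a kube-apiserver process running?
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(interval)
		}
		fmt.Println(errors.New("timed out waiting for kube-apiserver"))
	}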
	I1208 01:27:19.043139  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:19.054072  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:19.054144  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:19.089771  965470 cri.go:89] found id: ""
	I1208 01:27:19.089799  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.089809  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:19.089816  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:19.089879  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:19.114659  965470 cri.go:89] found id: ""
	I1208 01:27:19.114684  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.114693  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:19.114699  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:19.114762  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:19.141416  965470 cri.go:89] found id: ""
	I1208 01:27:19.141441  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.141450  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:19.141456  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:19.141514  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:19.167672  965470 cri.go:89] found id: ""
	I1208 01:27:19.167695  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.167703  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:19.167710  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:19.167770  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:19.198164  965470 cri.go:89] found id: ""
	I1208 01:27:19.198186  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.198196  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:19.198203  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:19.198261  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:19.224715  965470 cri.go:89] found id: ""
	I1208 01:27:19.224740  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.224749  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:19.224758  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:19.224819  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:19.250758  965470 cri.go:89] found id: ""
	I1208 01:27:19.250783  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.250793  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:19.250799  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:19.250884  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:19.276907  965470 cri.go:89] found id: ""
	I1208 01:27:19.276986  965470 logs.go:282] 0 containers: []
	W1208 01:27:19.277008  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:19.277024  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:19.277048  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:19.307817  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:19.307857  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:19.338999  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:19.339028  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:19.406156  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:19.406192  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:19.427706  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:19.427743  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:19.495479  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:21.996030  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:22.008761  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:22.008831  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:22.099892  965470 cri.go:89] found id: ""
	I1208 01:27:22.099915  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.099924  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:22.099931  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:22.100049  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:22.127049  965470 cri.go:89] found id: ""
	I1208 01:27:22.127076  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.127084  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:22.127090  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:22.127153  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:22.153499  965470 cri.go:89] found id: ""
	I1208 01:27:22.153525  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.153534  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:22.153541  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:22.153598  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:22.183856  965470 cri.go:89] found id: ""
	I1208 01:27:22.183887  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.183896  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:22.183903  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:22.183959  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:22.214575  965470 cri.go:89] found id: ""
	I1208 01:27:22.214601  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.214610  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:22.214616  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:22.214678  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:22.240550  965470 cri.go:89] found id: ""
	I1208 01:27:22.240623  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.240639  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:22.240647  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:22.240707  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:22.269948  965470 cri.go:89] found id: ""
	I1208 01:27:22.269973  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.269989  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:22.269996  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:22.270059  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:22.295729  965470 cri.go:89] found id: ""
	I1208 01:27:22.295754  965470 logs.go:282] 0 containers: []
	W1208 01:27:22.295763  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:22.295772  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:22.295802  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:22.326975  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:22.327011  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:22.358514  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:22.358543  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:22.427708  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:22.427747  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:22.445470  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:22.445500  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:22.510243  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:25.010949  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:25.044195  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:25.044276  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:25.095880  965470 cri.go:89] found id: ""
	I1208 01:27:25.095925  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.095935  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:25.095943  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:25.096022  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:25.142501  965470 cri.go:89] found id: ""
	I1208 01:27:25.142523  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.142532  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:25.142538  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:25.142597  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:25.180621  965470 cri.go:89] found id: ""
	I1208 01:27:25.180642  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.180650  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:25.180657  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:25.180711  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:25.209359  965470 cri.go:89] found id: ""
	I1208 01:27:25.209380  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.209389  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:25.209395  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:25.209451  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:25.253414  965470 cri.go:89] found id: ""
	I1208 01:27:25.253435  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.253444  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:25.253450  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:25.253511  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:25.289141  965470 cri.go:89] found id: ""
	I1208 01:27:25.289161  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.289170  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:25.289176  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:25.289236  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:25.330821  965470 cri.go:89] found id: ""
	I1208 01:27:25.330863  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.330872  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:25.330878  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:25.330939  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:25.362863  965470 cri.go:89] found id: ""
	I1208 01:27:25.362885  965470 logs.go:282] 0 containers: []
	W1208 01:27:25.362894  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:25.362903  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:25.362914  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:25.454059  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:25.454158  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:25.480897  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:25.481087  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:25.578714  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:25.578776  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:25.578805  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:25.618928  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:25.618967  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:28.151026  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:28.161497  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:28.161564  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:28.190442  965470 cri.go:89] found id: ""
	I1208 01:27:28.190466  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.190476  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:28.190483  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:28.190544  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:28.233876  965470 cri.go:89] found id: ""
	I1208 01:27:28.233908  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.233917  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:28.233924  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:28.233999  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:28.271359  965470 cri.go:89] found id: ""
	I1208 01:27:28.271382  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.271391  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:28.271397  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:28.271465  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:28.306334  965470 cri.go:89] found id: ""
	I1208 01:27:28.306355  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.306364  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:28.306370  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:28.306430  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:28.342781  965470 cri.go:89] found id: ""
	I1208 01:27:28.342804  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.342813  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:28.342819  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:28.342955  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:28.371498  965470 cri.go:89] found id: ""
	I1208 01:27:28.371520  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.371528  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:28.371534  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:28.371599  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:28.406588  965470 cri.go:89] found id: ""
	I1208 01:27:28.406609  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.406617  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:28.406623  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:28.406685  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:28.449371  965470 cri.go:89] found id: ""
	I1208 01:27:28.449395  965470 logs.go:282] 0 containers: []
	W1208 01:27:28.449403  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:28.449412  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:28.449424  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:28.492479  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:28.492506  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:28.575607  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:28.575653  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:28.600691  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:28.600733  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:28.694524  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:28.694547  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:28.694560  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:31.233911  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:31.244057  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:31.244130  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:31.268529  965470 cri.go:89] found id: ""
	I1208 01:27:31.268552  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.268560  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:31.268567  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:31.268625  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:31.294134  965470 cri.go:89] found id: ""
	I1208 01:27:31.294160  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.294170  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:31.294176  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:31.294237  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:31.319008  965470 cri.go:89] found id: ""
	I1208 01:27:31.319031  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.319040  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:31.319047  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:31.319105  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:31.344944  965470 cri.go:89] found id: ""
	I1208 01:27:31.344966  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.344975  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:31.344982  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:31.345039  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:31.371308  965470 cri.go:89] found id: ""
	I1208 01:27:31.371334  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.371343  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:31.371349  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:31.371412  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:31.397237  965470 cri.go:89] found id: ""
	I1208 01:27:31.397263  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.397272  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:31.397279  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:31.397338  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:31.423395  965470 cri.go:89] found id: ""
	I1208 01:27:31.423422  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.423430  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:31.423436  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:31.423496  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:31.450634  965470 cri.go:89] found id: ""
	I1208 01:27:31.450657  965470 logs.go:282] 0 containers: []
	W1208 01:27:31.450666  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:31.450675  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:31.450693  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:31.518197  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:31.518232  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:31.536485  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:31.536518  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:31.602434  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:31.602456  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:31.602468  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:31.633760  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:31.633794  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:34.162966  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:34.173295  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:34.173361  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:34.202184  965470 cri.go:89] found id: ""
	I1208 01:27:34.202206  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.202214  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:34.202220  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:34.202280  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:34.228880  965470 cri.go:89] found id: ""
	I1208 01:27:34.228908  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.228917  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:34.228924  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:34.228984  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:34.253935  965470 cri.go:89] found id: ""
	I1208 01:27:34.253961  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.253990  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:34.253999  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:34.254062  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:34.280802  965470 cri.go:89] found id: ""
	I1208 01:27:34.280826  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.280835  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:34.280843  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:34.280902  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:34.306687  965470 cri.go:89] found id: ""
	I1208 01:27:34.306713  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.306722  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:34.306728  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:34.306792  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:34.333523  965470 cri.go:89] found id: ""
	I1208 01:27:34.333546  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.333555  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:34.333562  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:34.333620  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:34.359485  965470 cri.go:89] found id: ""
	I1208 01:27:34.359508  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.359517  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:34.359523  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:34.359585  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:34.384877  965470 cri.go:89] found id: ""
	I1208 01:27:34.384900  965470 logs.go:282] 0 containers: []
	W1208 01:27:34.384909  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:34.384918  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:34.384931  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:34.403162  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:34.403188  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:34.482587  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:34.482607  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:34.482619  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:34.513407  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:34.513440  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:34.544781  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:34.544809  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:37.115819  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:37.125884  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:37.125954  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:37.154857  965470 cri.go:89] found id: ""
	I1208 01:27:37.154878  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.154886  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:37.154892  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:37.154949  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:37.180166  965470 cri.go:89] found id: ""
	I1208 01:27:37.180190  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.180199  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:37.180205  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:37.180262  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:37.207516  965470 cri.go:89] found id: ""
	I1208 01:27:37.207542  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.207550  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:37.207557  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:37.207615  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:37.232683  965470 cri.go:89] found id: ""
	I1208 01:27:37.232708  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.232717  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:37.232723  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:37.232780  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:37.261166  965470 cri.go:89] found id: ""
	I1208 01:27:37.261190  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.261203  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:37.261218  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:37.261286  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:37.285980  965470 cri.go:89] found id: ""
	I1208 01:27:37.286006  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.286016  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:37.286022  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:37.286093  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:37.311743  965470 cri.go:89] found id: ""
	I1208 01:27:37.311768  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.311777  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:37.311783  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:37.311844  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:37.336681  965470 cri.go:89] found id: ""
	I1208 01:27:37.336706  965470 logs.go:282] 0 containers: []
	W1208 01:27:37.336715  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:37.336724  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:37.336754  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:37.400342  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:37.400361  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:37.400373  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:37.431379  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:37.431413  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:37.466032  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:37.466058  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:37.533447  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:37.533480  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:40.054981  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:40.066239  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:40.066318  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:40.094691  965470 cri.go:89] found id: ""
	I1208 01:27:40.094717  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.094725  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:40.094732  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:40.094795  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:40.122478  965470 cri.go:89] found id: ""
	I1208 01:27:40.122504  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.122513  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:40.122520  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:40.122580  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:40.148764  965470 cri.go:89] found id: ""
	I1208 01:27:40.148792  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.148801  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:40.148807  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:40.148865  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:40.181494  965470 cri.go:89] found id: ""
	I1208 01:27:40.181519  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.181528  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:40.181534  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:40.181604  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:40.207748  965470 cri.go:89] found id: ""
	I1208 01:27:40.207769  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.207777  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:40.207783  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:40.207850  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:40.233556  965470 cri.go:89] found id: ""
	I1208 01:27:40.233577  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.233586  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:40.233592  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:40.233649  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:40.260336  965470 cri.go:89] found id: ""
	I1208 01:27:40.260361  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.260370  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:40.260377  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:40.260437  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:40.285473  965470 cri.go:89] found id: ""
	I1208 01:27:40.285501  965470 logs.go:282] 0 containers: []
	W1208 01:27:40.285510  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:40.285519  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:40.285531  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:40.353026  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:40.353065  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:40.371817  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:40.371855  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:40.437444  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:40.437462  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:40.437475  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:40.468167  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:40.468204  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:42.995723  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:43.012184  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:43.012259  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:43.090720  965470 cri.go:89] found id: ""
	I1208 01:27:43.090741  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.090750  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:43.090756  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:43.090815  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:43.159432  965470 cri.go:89] found id: ""
	I1208 01:27:43.159452  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.159460  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:43.159469  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:43.159531  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:43.193044  965470 cri.go:89] found id: ""
	I1208 01:27:43.193065  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.193073  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:43.193079  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:43.193141  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:43.224997  965470 cri.go:89] found id: ""
	I1208 01:27:43.225018  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.225026  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:43.225038  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:43.225097  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:43.256990  965470 cri.go:89] found id: ""
	I1208 01:27:43.257012  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.257020  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:43.257026  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:43.257082  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:43.293056  965470 cri.go:89] found id: ""
	I1208 01:27:43.293078  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.293087  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:43.293093  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:43.293153  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:43.323747  965470 cri.go:89] found id: ""
	I1208 01:27:43.323818  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.323842  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:43.323859  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:43.323950  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:43.357153  965470 cri.go:89] found id: ""
	I1208 01:27:43.357226  965470 logs.go:282] 0 containers: []
	W1208 01:27:43.357247  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:43.357271  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:43.357320  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:43.450518  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:43.450590  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:43.450617  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:43.485011  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:43.485047  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:43.531828  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:43.531912  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:43.621478  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:43.621576  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:46.146483  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:46.156513  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:46.156583  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:46.181976  965470 cri.go:89] found id: ""
	I1208 01:27:46.182001  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.182010  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:46.182017  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:46.182078  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:46.213318  965470 cri.go:89] found id: ""
	I1208 01:27:46.213344  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.213359  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:46.213366  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:46.213437  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:46.239103  965470 cri.go:89] found id: ""
	I1208 01:27:46.239132  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.239142  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:46.239148  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:46.239208  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:46.269641  965470 cri.go:89] found id: ""
	I1208 01:27:46.269666  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.269675  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:46.269682  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:46.269741  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:46.295113  965470 cri.go:89] found id: ""
	I1208 01:27:46.295139  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.295148  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:46.295155  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:46.295233  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:46.321462  965470 cri.go:89] found id: ""
	I1208 01:27:46.321505  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.321515  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:46.321522  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:46.321621  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:46.352655  965470 cri.go:89] found id: ""
	I1208 01:27:46.352678  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.352688  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:46.352694  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:46.352781  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:46.381572  965470 cri.go:89] found id: ""
	I1208 01:27:46.381649  965470 logs.go:282] 0 containers: []
	W1208 01:27:46.381673  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:46.381696  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:46.381732  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:46.448776  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:46.448813  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:46.467144  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:46.467299  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:46.533312  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:46.533345  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:46.533364  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:46.567069  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:46.567104  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:49.097420  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:49.107829  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:49.107900  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:49.133432  965470 cri.go:89] found id: ""
	I1208 01:27:49.133459  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.133468  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:49.133475  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:49.133535  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:49.159503  965470 cri.go:89] found id: ""
	I1208 01:27:49.159528  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.159537  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:49.159544  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:49.159605  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:49.185768  965470 cri.go:89] found id: ""
	I1208 01:27:49.185792  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.185801  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:49.185819  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:49.185878  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:49.214036  965470 cri.go:89] found id: ""
	I1208 01:27:49.214078  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.214087  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:49.214094  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:49.214160  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:49.239530  965470 cri.go:89] found id: ""
	I1208 01:27:49.239608  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.239624  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:49.239635  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:49.239702  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:49.270444  965470 cri.go:89] found id: ""
	I1208 01:27:49.270468  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.270477  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:49.270484  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:49.270548  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:49.296650  965470 cri.go:89] found id: ""
	I1208 01:27:49.296675  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.296684  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:49.296691  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:49.296777  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:49.323213  965470 cri.go:89] found id: ""
	I1208 01:27:49.323237  965470 logs.go:282] 0 containers: []
	W1208 01:27:49.323246  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:49.323255  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:49.323267  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:49.385035  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:49.385055  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:49.385068  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:49.416069  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:49.416101  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:49.446656  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:49.446688  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:49.516153  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:49.516192  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:52.034602  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:52.047091  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:52.047162  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:52.077105  965470 cri.go:89] found id: ""
	I1208 01:27:52.077126  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.077136  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:52.077143  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:52.077206  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:52.107205  965470 cri.go:89] found id: ""
	I1208 01:27:52.107228  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.107237  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:52.107243  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:52.107316  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:52.139023  965470 cri.go:89] found id: ""
	I1208 01:27:52.139049  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.139059  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:52.139065  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:52.139127  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:52.165406  965470 cri.go:89] found id: ""
	I1208 01:27:52.165431  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.165440  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:52.165447  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:52.165514  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:52.195517  965470 cri.go:89] found id: ""
	I1208 01:27:52.195543  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.195552  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:52.195558  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:52.195618  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:52.221083  965470 cri.go:89] found id: ""
	I1208 01:27:52.221107  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.221116  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:52.221122  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:52.221181  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:52.248643  965470 cri.go:89] found id: ""
	I1208 01:27:52.248667  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.248676  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:52.248683  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:52.248746  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:52.282924  965470 cri.go:89] found id: ""
	I1208 01:27:52.282951  965470 logs.go:282] 0 containers: []
	W1208 01:27:52.282960  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:52.282969  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:52.282981  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:52.315350  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:52.315381  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:52.346784  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:52.346812  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:52.416128  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:52.416169  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:52.434306  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:52.434338  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:52.503307  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:55.003544  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:55.032649  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:55.032717  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:55.108290  965470 cri.go:89] found id: ""
	I1208 01:27:55.108319  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.108328  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:55.108335  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:55.108395  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:55.147692  965470 cri.go:89] found id: ""
	I1208 01:27:55.147714  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.147725  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:55.147733  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:55.147821  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:55.180158  965470 cri.go:89] found id: ""
	I1208 01:27:55.180180  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.180189  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:55.180195  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:55.180254  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:55.217161  965470 cri.go:89] found id: ""
	I1208 01:27:55.217183  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.217192  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:55.217198  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:55.217255  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:55.254668  965470 cri.go:89] found id: ""
	I1208 01:27:55.254689  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.254697  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:55.254703  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:55.254759  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:55.291323  965470 cri.go:89] found id: ""
	I1208 01:27:55.291345  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.291353  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:55.291360  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:55.291416  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:55.316770  965470 cri.go:89] found id: ""
	I1208 01:27:55.316791  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.316800  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:55.316806  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:55.316865  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:55.342620  965470 cri.go:89] found id: ""
	I1208 01:27:55.342641  965470 logs.go:282] 0 containers: []
	W1208 01:27:55.342649  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:55.342659  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:55.342670  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:55.412905  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:55.412941  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:55.433627  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:55.433664  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:55.497724  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:55.497743  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:55.497755  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:55.528834  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:55.528870  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:58.059647  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:58.070137  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:58.070213  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:58.098738  965470 cri.go:89] found id: ""
	I1208 01:27:58.098762  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.098771  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:27:58.098778  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:27:58.098867  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:58.127273  965470 cri.go:89] found id: ""
	I1208 01:27:58.127297  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.127306  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:27:58.127313  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:27:58.127376  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:58.153331  965470 cri.go:89] found id: ""
	I1208 01:27:58.153355  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.153363  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:27:58.153369  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:58.153425  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:58.182522  965470 cri.go:89] found id: ""
	I1208 01:27:58.182546  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.182555  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:27:58.182561  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:58.182619  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:58.206861  965470 cri.go:89] found id: ""
	I1208 01:27:58.206884  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.206892  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:58.206899  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:58.206957  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:58.235871  965470 cri.go:89] found id: ""
	I1208 01:27:58.235898  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.235906  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:27:58.235913  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:58.235972  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:58.260249  965470 cri.go:89] found id: ""
	I1208 01:27:58.260324  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.260340  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:58.260348  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:58.260412  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:58.285596  965470 cri.go:89] found id: ""
	I1208 01:27:58.285621  965470 logs.go:282] 0 containers: []
	W1208 01:27:58.285629  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:58.285638  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:58.285651  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:58.303291  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:58.303322  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:58.372278  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:58.372299  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:27:58.372311  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:27:58.407942  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:27:58.407978  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:58.437521  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:58.437547  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:01.005345  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:01.016434  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:28:01.016506  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:28:01.044413  965470 cri.go:89] found id: ""
	I1208 01:28:01.044442  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.044452  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:28:01.044459  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:28:01.044523  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:28:01.076134  965470 cri.go:89] found id: ""
	I1208 01:28:01.076157  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.076165  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:28:01.076172  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:28:01.076231  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:28:01.103736  965470 cri.go:89] found id: ""
	I1208 01:28:01.103761  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.103771  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:28:01.103777  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:28:01.103838  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:28:01.141413  965470 cri.go:89] found id: ""
	I1208 01:28:01.141442  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.141451  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:28:01.141457  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:28:01.141517  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:28:01.173373  965470 cri.go:89] found id: ""
	I1208 01:28:01.173401  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.173411  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:28:01.173439  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:28:01.173513  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:28:01.219021  965470 cri.go:89] found id: ""
	I1208 01:28:01.219047  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.219056  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:28:01.219063  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:28:01.219122  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:28:01.261929  965470 cri.go:89] found id: ""
	I1208 01:28:01.261978  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.261989  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:28:01.261996  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:28:01.262063  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:28:01.348417  965470 cri.go:89] found id: ""
	I1208 01:28:01.348440  965470 logs.go:282] 0 containers: []
	W1208 01:28:01.348448  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:28:01.348457  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:28:01.348469  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:28:01.367787  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:28:01.367874  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:28:01.480013  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:28:01.480035  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:28:01.480048  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:28:01.512199  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:28:01.512273  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:28:01.561784  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:28:01.561815  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:04.148435  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:04.159641  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:28:04.159715  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:28:04.186578  965470 cri.go:89] found id: ""
	I1208 01:28:04.186608  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.186617  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:28:04.186624  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:28:04.186686  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:28:04.212154  965470 cri.go:89] found id: ""
	I1208 01:28:04.212182  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.212191  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:28:04.212197  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:28:04.212255  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:28:04.239062  965470 cri.go:89] found id: ""
	I1208 01:28:04.239096  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.239106  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:28:04.239113  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:28:04.239173  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:28:04.264948  965470 cri.go:89] found id: ""
	I1208 01:28:04.264969  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.264978  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:28:04.264985  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:28:04.265043  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:28:04.292713  965470 cri.go:89] found id: ""
	I1208 01:28:04.292738  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.292747  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:28:04.292753  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:28:04.292809  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:28:04.331322  965470 cri.go:89] found id: ""
	I1208 01:28:04.331349  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.331358  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:28:04.331364  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:28:04.331422  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:28:04.361994  965470 cri.go:89] found id: ""
	I1208 01:28:04.362019  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.362029  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:28:04.362034  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:28:04.362094  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:28:04.395757  965470 cri.go:89] found id: ""
	I1208 01:28:04.395781  965470 logs.go:282] 0 containers: []
	W1208 01:28:04.395789  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:28:04.395803  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:28:04.395815  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:28:04.432859  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:28:04.432911  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:28:04.481237  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:28:04.481266  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:04.576905  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:28:04.576989  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:28:04.621149  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:28:04.621182  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:28:04.709542  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:28:07.209765  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:07.220145  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:28:07.220220  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:28:07.249837  965470 cri.go:89] found id: ""
	I1208 01:28:07.249858  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.249867  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:28:07.249873  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:28:07.249931  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:28:07.277638  965470 cri.go:89] found id: ""
	I1208 01:28:07.277660  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.277668  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:28:07.277675  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:28:07.277741  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:28:07.303638  965470 cri.go:89] found id: ""
	I1208 01:28:07.303662  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.303671  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:28:07.303678  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:28:07.303743  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:28:07.329640  965470 cri.go:89] found id: ""
	I1208 01:28:07.329666  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.329674  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:28:07.329681  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:28:07.329743  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:28:07.361207  965470 cri.go:89] found id: ""
	I1208 01:28:07.361227  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.361236  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:28:07.361243  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:28:07.361307  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:28:07.386241  965470 cri.go:89] found id: ""
	I1208 01:28:07.386266  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.386275  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:28:07.386282  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:28:07.386342  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:28:07.411505  965470 cri.go:89] found id: ""
	I1208 01:28:07.411530  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.411539  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:28:07.411559  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:28:07.411617  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:28:07.437054  965470 cri.go:89] found id: ""
	I1208 01:28:07.437081  965470 logs.go:282] 0 containers: []
	W1208 01:28:07.437089  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:28:07.437099  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:28:07.437110  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:07.504835  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:28:07.504922  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:28:07.525423  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:28:07.525506  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:28:07.601187  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:28:07.601206  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:28:07.601218  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:28:07.640249  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:28:07.640286  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:28:10.173494  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:10.183891  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:28:10.183969  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:28:10.208700  965470 cri.go:89] found id: ""
	I1208 01:28:10.208726  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.208735  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:28:10.208741  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:28:10.208801  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:28:10.233551  965470 cri.go:89] found id: ""
	I1208 01:28:10.233577  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.233587  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:28:10.233593  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:28:10.233653  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:28:10.261670  965470 cri.go:89] found id: ""
	I1208 01:28:10.261697  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.261706  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:28:10.261714  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:28:10.261771  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:28:10.287865  965470 cri.go:89] found id: ""
	I1208 01:28:10.287892  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.287901  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:28:10.287908  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:28:10.287983  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:28:10.313242  965470 cri.go:89] found id: ""
	I1208 01:28:10.313266  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.313277  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:28:10.313283  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:28:10.313339  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:28:10.338449  965470 cri.go:89] found id: ""
	I1208 01:28:10.338474  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.338483  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:28:10.338490  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:28:10.338547  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:28:10.362998  965470 cri.go:89] found id: ""
	I1208 01:28:10.363024  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.363033  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:28:10.363039  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:28:10.363098  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:28:10.388425  965470 cri.go:89] found id: ""
	I1208 01:28:10.388448  965470 logs.go:282] 0 containers: []
	W1208 01:28:10.388457  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:28:10.388466  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:28:10.388478  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:28:10.453998  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:28:10.454017  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:28:10.454031  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:28:10.484575  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:28:10.484617  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:28:10.517304  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:28:10.517331  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:10.610023  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:28:10.610073  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:28:13.130752  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:13.141403  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:28:13.141468  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:28:13.169714  965470 cri.go:89] found id: ""
	I1208 01:28:13.169734  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.169742  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:28:13.169748  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:28:13.169804  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:28:13.198230  965470 cri.go:89] found id: ""
	I1208 01:28:13.198251  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.198260  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:28:13.198266  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:28:13.198326  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:28:13.226432  965470 cri.go:89] found id: ""
	I1208 01:28:13.226454  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.226462  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:28:13.226468  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:28:13.226532  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:28:13.253126  965470 cri.go:89] found id: ""
	I1208 01:28:13.253147  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.253155  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:28:13.253161  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:28:13.253236  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:28:13.278156  965470 cri.go:89] found id: ""
	I1208 01:28:13.278179  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.278187  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:28:13.278194  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:28:13.278254  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:28:13.303491  965470 cri.go:89] found id: ""
	I1208 01:28:13.303528  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.303538  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:28:13.303545  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:28:13.303604  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:28:13.330105  965470 cri.go:89] found id: ""
	I1208 01:28:13.330127  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.330147  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:28:13.330153  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:28:13.330217  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:28:13.355489  965470 cri.go:89] found id: ""
	I1208 01:28:13.355518  965470 logs.go:282] 0 containers: []
	W1208 01:28:13.355527  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:28:13.355535  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:28:13.355547  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:28:13.385444  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:28:13.385472  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:28:13.453504  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:28:13.453545  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:28:13.472785  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:28:13.472817  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:28:13.569096  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:28:13.569120  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:28:13.569134  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:28:16.102580  965470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:28:16.112464  965470 kubeadm.go:602] duration metric: took 4m4.982863916s to restartPrimaryControlPlane
	W1208 01:28:16.112529  965470 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 01:28:16.112595  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 01:28:16.524999  965470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:28:16.543466  965470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:28:16.552153  965470 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:28:16.552214  965470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:28:16.560022  965470 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:28:16.560042  965470 kubeadm.go:158] found existing configuration files:
	
	I1208 01:28:16.560095  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:28:16.568188  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:28:16.568254  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:28:16.575982  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:28:16.584019  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:28:16.584085  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:28:16.592519  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:28:16.600368  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:28:16.600435  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:28:16.607800  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:28:16.615597  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:28:16.615661  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:28:16.622959  965470 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:28:16.661420  965470 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:28:16.661750  965470 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:28:16.745724  965470 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:28:16.745799  965470 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:28:16.745864  965470 kubeadm.go:319] OS: Linux
	I1208 01:28:16.745983  965470 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:28:16.746041  965470 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:28:16.746090  965470 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:28:16.746138  965470 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:28:16.746187  965470 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:28:16.746234  965470 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:28:16.746279  965470 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:28:16.746327  965470 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:28:16.746372  965470 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:28:16.806958  965470 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:28:16.807072  965470 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:28:16.807182  965470 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:28:16.815736  965470 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:28:16.819172  965470 out.go:252]   - Generating certificates and keys ...
	I1208 01:28:16.819279  965470 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:28:16.819361  965470 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:28:16.819455  965470 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:28:16.819543  965470 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:28:16.819617  965470 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:28:16.819688  965470 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:28:16.819769  965470 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:28:16.819841  965470 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:28:16.819926  965470 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:28:16.820018  965470 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:28:16.820217  965470 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:28:16.820295  965470 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:28:17.156044  965470 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:28:17.259968  965470 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:28:17.637900  965470 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:28:17.866701  965470 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:28:18.353605  965470 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:28:18.354921  965470 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:28:18.357795  965470 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:28:18.361520  965470 out.go:252]   - Booting up control plane ...
	I1208 01:28:18.361662  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:28:18.361800  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:28:18.362635  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:28:18.395359  965470 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:28:18.395513  965470 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:28:18.405463  965470 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:28:18.406222  965470 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:28:18.406580  965470 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:28:18.568485  965470 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:28:18.568617  965470 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:32:18.569293  965470 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001207053s
	I1208 01:32:18.569343  965470 kubeadm.go:319] 
	I1208 01:32:18.569420  965470 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:32:18.569460  965470 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:32:18.569559  965470 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:32:18.569572  965470 kubeadm.go:319] 
	I1208 01:32:18.569670  965470 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:32:18.569704  965470 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:32:18.569740  965470 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:32:18.569748  965470 kubeadm.go:319] 
	I1208 01:32:18.574230  965470 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:32:18.574658  965470 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:32:18.574771  965470 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:32:18.575086  965470 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:32:18.575098  965470 kubeadm.go:319] 
	I1208 01:32:18.575167  965470 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 01:32:18.575293  965470 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207053s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001207053s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
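	The failure above is the kubelet health timeout, and the kubeadm output itself names the next diagnostic steps. A short sketch of those steps, run inside the minikube node (access via `minikube ssh` is an assumption; the commands and the healthz URL are taken verbatim from the error text):

	# Check whether the kubelet service is running and why it may have exited:
	systemctl status kubelet
	journalctl -xeu kubelet
	# The endpoint kubeadm polls for up to 4m0s before giving up:
	curl -sSL http://127.0.0.1:10248/healthz

	If the healthz call keeps failing with "context deadline exceeded" or "connection refused", the kubelet journal output is the place where the underlying misconfiguration would show up.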
	
	I1208 01:32:18.575378  965470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 01:32:18.985332  965470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:32:19.005702  965470 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:32:19.005791  965470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:32:19.014084  965470 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:32:19.014107  965470 kubeadm.go:158] found existing configuration files:
	
	I1208 01:32:19.014161  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:32:19.021942  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:32:19.022015  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:32:19.029597  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:32:19.037562  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:32:19.037653  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:32:19.045117  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:32:19.052733  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:32:19.052828  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:32:19.067886  965470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:32:19.075678  965470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:32:19.075761  965470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:32:19.083530  965470 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:32:19.123077  965470 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:32:19.123420  965470 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:32:19.197141  965470 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:32:19.197214  965470 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:32:19.197256  965470 kubeadm.go:319] OS: Linux
	I1208 01:32:19.197303  965470 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:32:19.197352  965470 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:32:19.197401  965470 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:32:19.197450  965470 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:32:19.197499  965470 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:32:19.197547  965470 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:32:19.197594  965470 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:32:19.197642  965470 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:32:19.197689  965470 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:32:19.264787  965470 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:32:19.264938  965470 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:32:19.265056  965470 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:32:19.279249  965470 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:32:19.284744  965470 out.go:252]   - Generating certificates and keys ...
	I1208 01:32:19.284877  965470 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:32:19.284976  965470 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:32:19.285088  965470 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:32:19.285170  965470 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:32:19.285257  965470 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:32:19.285343  965470 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:32:19.285422  965470 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:32:19.285498  965470 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:32:19.285605  965470 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:32:19.285696  965470 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:32:19.285784  965470 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:32:19.285858  965470 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:32:19.353917  965470 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:32:19.456999  965470 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:32:19.576995  965470 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:32:19.899477  965470 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:32:20.131325  965470 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:32:20.132888  965470 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:32:20.137204  965470 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:32:20.141907  965470 out.go:252]   - Booting up control plane ...
	I1208 01:32:20.142028  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:32:20.142113  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:32:20.151419  965470 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:32:20.163492  965470 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:32:20.163996  965470 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:32:20.178745  965470 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:32:20.178866  965470 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:32:20.178914  965470 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:32:20.366487  965470 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:32:20.366609  965470 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:36:20.367832  965470 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001095408s
	I1208 01:36:20.367866  965470 kubeadm.go:319] 
	I1208 01:36:20.368108  965470 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:36:20.368176  965470 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:36:20.368367  965470 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:36:20.368373  965470 kubeadm.go:319] 
	I1208 01:36:20.368804  965470 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:36:20.368865  965470 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:36:20.368920  965470 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:36:20.368925  965470 kubeadm.go:319] 
	I1208 01:36:20.371918  965470 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:36:20.373107  965470 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:36:20.373314  965470 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:36:20.374296  965470 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 01:36:20.374317  965470 kubeadm.go:319] 
	I1208 01:36:20.374441  965470 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:36:20.374509  965470 kubeadm.go:403] duration metric: took 12m9.334957931s to StartCluster
	I1208 01:36:20.374552  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:36:20.374621  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:36:20.416560  965470 cri.go:89] found id: ""
	I1208 01:36:20.416585  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.416594  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:36:20.416641  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:36:20.416714  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:36:20.447485  965470 cri.go:89] found id: ""
	I1208 01:36:20.447508  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.447516  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:36:20.447522  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:36:20.447583  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:36:20.478226  965470 cri.go:89] found id: ""
	I1208 01:36:20.478249  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.478257  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:36:20.478263  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:36:20.478319  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:36:20.513685  965470 cri.go:89] found id: ""
	I1208 01:36:20.513711  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.513720  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:36:20.513732  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:36:20.513793  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:36:20.545157  965470 cri.go:89] found id: ""
	I1208 01:36:20.545180  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.545189  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:36:20.545194  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:36:20.545255  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:36:20.572978  965470 cri.go:89] found id: ""
	I1208 01:36:20.573001  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.573010  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:36:20.573017  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:36:20.573077  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:36:20.613135  965470 cri.go:89] found id: ""
	I1208 01:36:20.613161  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.613169  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:36:20.613176  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:36:20.613238  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:36:20.647679  965470 cri.go:89] found id: ""
	I1208 01:36:20.647712  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.647722  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:36:20.647732  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:36:20.647744  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:36:20.740283  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:36:20.740303  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:36:20.740316  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:36:20.781177  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:36:20.781258  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:36:20.869245  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:36:20.869273  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:36:20.950362  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:36:20.950401  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:36:20.971106  965470 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:36:20.971168  965470 out.go:285] * 
	* 
	W1208 01:36:20.971285  965470 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:36:20.971324  965470 out.go:285] * 
	* 
	W1208 01:36:20.974214  965470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:36:20.980009  965470 out.go:203] 
	W1208 01:36:20.982935  965470 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:36:20.983157  965470 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:36:20.983187  965470 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:36:20.988230  965470 out.go:203] 

                                                
                                                
** /stderr **
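The failure above boils down to the kubelet never answering its health probe, so kubeadm gave up after 4m0s and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal triage sketch, using only commands the log itself names (profile name and flags are taken from this report, not new assumptions); the first three run inside the node, e.g. via 'out/minikube-linux-arm64 ssh -p kubernetes-upgrade-386622':

    # why did the kubelet stop (or never start)?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # the probe kubeadm was waiting on
    curl -sSL http://127.0.0.1:10248/healthz

    # retry from the host with the cgroup-driver override the log suggests
    out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 \
      --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd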
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-386622 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-386622 version --output=json: exit status 1 (98.834339ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
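kubectl itself is fine here (the client version prints), but the API server at 192.168.76.2:8443 refuses connections, consistent with the control plane never coming up. A quick reachability check, sketched from the addresses and container name shown in this report:

    # host port Docker forwards to the API server port (33735 per the inspect output below)
    docker port kubernetes-upgrade-386622 8443
    # does anything answer on the cluster IP? (-k: the test cluster uses a self-signed CA)
    curl -k https://192.168.76.2:8443/healthz
    kubectl --context kubernetes-upgrade-386622 cluster-info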
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-08 01:36:21.705083275 +0000 UTC m=+5028.678823097
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-386622
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-386622:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541",
	        "Created": "2025-12-08T01:23:21.031961713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 965658,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:23:56.942599877Z",
	            "FinishedAt": "2025-12-08T01:23:55.854795999Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541/hostname",
	        "HostsPath": "/var/lib/docker/containers/aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541/hosts",
	        "LogPath": "/var/lib/docker/containers/aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541/aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541-json.log",
	        "Name": "/kubernetes-upgrade-386622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-386622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-386622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aec6ae25e902067453a23a87df8e89cf8c4b0471845320ce8029f5785012c541",
	                "LowerDir": "/var/lib/docker/overlay2/2804c85010a8ac71dd981e2b0877324b245168493d230cb1012cc88ccd49f710-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2804c85010a8ac71dd981e2b0877324b245168493d230cb1012cc88ccd49f710/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2804c85010a8ac71dd981e2b0877324b245168493d230cb1012cc88ccd49f710/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2804c85010a8ac71dd981e2b0877324b245168493d230cb1012cc88ccd49f710/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-386622",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-386622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-386622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-386622",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-386622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98e93485d9a4b74f6d6934db9cb55c0b1f5762293d9cef7509a226eb39185fbb",
	            "SandboxKey": "/var/run/docker/netns/98e93485d9a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33733"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33736"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33734"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33735"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-386622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:44:26:1d:d2:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d17f9c0dcdbaeec0040c864bfaebead1ca6972cf9a8bceb6da52d0e6a7e60c8d",
	                    "EndpointID": "9d24e9be305eb539be6a2551997878fa5d9d787a2446415a58353a4fd5ab2d6c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-386622",
	                        "aec6ae25e902"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
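The full inspect dump above can be narrowed to the fields the post-mortem actually cares about by using docker's Go-template formatting; a small sketch against the same container (standard docker CLI flags, nothing minikube-specific):

    docker inspect -f '{{.State.Status}}' kubernetes-upgrade-386622
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-386622
    docker port kubernetes-upgrade-386622 8443    # host port mapped to the API server (33735 above)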
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-386622 -n kubernetes-upgrade-386622
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-386622 -n kubernetes-upgrade-386622: exit status 2 (412.218385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
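'--format={{.Host}}' reports only the host state, which is why a 'Running' host plus exit status 2 is flagged as possibly ok: the components failing in this run are the kubelet and apiserver, which the default status output lists separately. A fuller check, same binary and profile as above ('--output=json' is the standard machine-readable form):

    out/minikube-linux-arm64 status -p kubernetes-upgrade-386622
    out/minikube-linux-arm64 status -p kubernetes-upgrade-386622 --output=json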
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-386622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-386622 logs -n 25: (1.025649191s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p stopped-upgrade-971260                                                                                                   │ stopped-upgrade-971260    │ jenkins │ v1.37.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ running-upgrade-457612    │ jenkins │ v1.35.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:29 UTC │ 08 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-457612                                                                                                   │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	│ start   │ -p pause-814452 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:35 UTC │
	│ start   │ -p pause-814452 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │ 08 Dec 25 01:35 UTC │
	│ pause   │ -p pause-814452 --alsologtostderr -v=5                                                                                      │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │                     │
	│ delete  │ -p pause-814452                                                                                                             │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │ 08 Dec 25 01:35 UTC │
	│ start   │ -p force-systemd-flag-279155 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-279155 │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │ 08 Dec 25 01:36 UTC │
	│ ssh     │ force-systemd-flag-279155 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-279155 │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ delete  │ -p force-systemd-flag-279155                                                                                                │ force-systemd-flag-279155 │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ ssh     │ -p kubenet-000739 sudo cat /etc/nsswitch.conf                                                                               │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo cat /etc/hosts                                                                                       │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo cat /etc/resolv.conf                                                                                 │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo crictl pods                                                                                          │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo crictl ps --all                                                                                      │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                               │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo ip a s                                                                                               │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo ip r s                                                                                               │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo iptables-save                                                                                        │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo iptables -t nat -L -n -v                                                                             │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo systemctl status kubelet --all --full --no-pager                                                     │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo systemctl cat kubelet --no-pager                                                                     │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo journalctl -xeu kubelet --all --full --no-pager                                                      │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p kubenet-000739 sudo cat /etc/kubernetes/kubelet.conf                                                                     │ kubenet-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:35:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:35:44.497274 1000785 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:35:44.497401 1000785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:44.497414 1000785 out.go:374] Setting ErrFile to fd 2...
	I1208 01:35:44.497420 1000785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:44.497668 1000785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:35:44.498112 1000785 out.go:368] Setting JSON to false
	I1208 01:35:44.499111 1000785 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22677,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:35:44.499178 1000785 start.go:143] virtualization:  
	I1208 01:35:44.503230 1000785 out.go:179] * [force-systemd-flag-279155] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:35:44.507881 1000785 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:35:44.507959 1000785 notify.go:221] Checking for updates...
	I1208 01:35:44.515117 1000785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:35:44.518495 1000785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:35:44.521796 1000785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:35:44.524971 1000785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:35:44.528253 1000785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:35:44.531885 1000785 config.go:182] Loaded profile config "kubernetes-upgrade-386622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:35:44.532051 1000785 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:35:44.555446 1000785 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:35:44.555618 1000785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:44.630656 1000785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:35:44.621270969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:44.630766 1000785 docker.go:319] overlay module found
	I1208 01:35:44.635948 1000785 out.go:179] * Using the docker driver based on user configuration
	I1208 01:35:44.638932 1000785 start.go:309] selected driver: docker
	I1208 01:35:44.638957 1000785 start.go:927] validating driver "docker" against <nil>
	I1208 01:35:44.638978 1000785 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:35:44.639712 1000785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:44.695920 1000785 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:35:44.686394937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:44.696082 1000785 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:35:44.696311 1000785 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 01:35:44.699338 1000785 out.go:179] * Using Docker driver with root privileges
	I1208 01:35:44.702356 1000785 cni.go:84] Creating CNI manager for ""
	I1208 01:35:44.702425 1000785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:44.702437 1000785 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:35:44.702517 1000785 start.go:353] cluster config:
	{Name:force-systemd-flag-279155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-279155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:44.705714 1000785 out.go:179] * Starting "force-systemd-flag-279155" primary control-plane node in "force-systemd-flag-279155" cluster
	I1208 01:35:44.708638 1000785 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:35:44.711507 1000785 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:35:44.714337 1000785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:44.714382 1000785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:35:44.714394 1000785 cache.go:65] Caching tarball of preloaded images
	I1208 01:35:44.714396 1000785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:35:44.714480 1000785 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:35:44.714489 1000785 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:35:44.714595 1000785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/config.json ...
	I1208 01:35:44.714613 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/config.json: {Name:mke33aec238d7c9701470126c1ef8fdb41842f05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:44.735339 1000785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:35:44.735361 1000785 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
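	Both cache checks above can be repeated by hand on the Jenkins host. A minimal sketch (the config.json path and image digest are taken from the log above; python3 -m json.tool is only a convenient pretty-printer and is an assumption about the host, not part of this run):

	    # pretty-print the cluster config that was just saved for this profile
	    python3 -m json.tool /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/config.json
	    # confirm the kicbase image is already present in the local daemon, so no pull is needed
	    docker image inspect \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 \
	      --format '{{.Id}}'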
	I1208 01:35:44.735380 1000785 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:35:44.735419 1000785 start.go:360] acquireMachinesLock for force-systemd-flag-279155: {Name:mked254f24c071e5e35f7a983433997441bac923 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:35:44.735522 1000785 start.go:364] duration metric: took 83.251µs to acquireMachinesLock for "force-systemd-flag-279155"
	I1208 01:35:44.735554 1000785 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-279155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-279155 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:35:44.735624 1000785 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:35:44.739178 1000785 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:35:44.739420 1000785 start.go:159] libmachine.API.Create for "force-systemd-flag-279155" (driver="docker")
	I1208 01:35:44.739459 1000785 client.go:173] LocalClient.Create starting
	I1208 01:35:44.739534 1000785 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:35:44.739580 1000785 main.go:143] libmachine: Decoding PEM data...
	I1208 01:35:44.739600 1000785 main.go:143] libmachine: Parsing certificate...
	I1208 01:35:44.739666 1000785 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:35:44.739687 1000785 main.go:143] libmachine: Decoding PEM data...
	I1208 01:35:44.739699 1000785 main.go:143] libmachine: Parsing certificate...
	I1208 01:35:44.740081 1000785 cli_runner.go:164] Run: docker network inspect force-systemd-flag-279155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:35:44.765094 1000785 cli_runner.go:211] docker network inspect force-systemd-flag-279155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:35:44.765218 1000785 network_create.go:284] running [docker network inspect force-systemd-flag-279155] to gather additional debugging logs...
	I1208 01:35:44.765248 1000785 cli_runner.go:164] Run: docker network inspect force-systemd-flag-279155
	W1208 01:35:44.788463 1000785 cli_runner.go:211] docker network inspect force-systemd-flag-279155 returned with exit code 1
	I1208 01:35:44.788500 1000785 network_create.go:287] error running [docker network inspect force-systemd-flag-279155]: docker network inspect force-systemd-flag-279155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-279155 not found
	I1208 01:35:44.788514 1000785 network_create.go:289] output of [docker network inspect force-systemd-flag-279155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-279155 not found
	
	** /stderr **
	I1208 01:35:44.788628 1000785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:35:44.804786 1000785 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:35:44.805090 1000785 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:35:44.805384 1000785 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:35:44.805715 1000785 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d17f9c0dcdba IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:3e:ee:ef:b1:39} reservation:<nil>}
	I1208 01:35:44.806173 1000785 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4a090}
	I1208 01:35:44.806198 1000785 network_create.go:124] attempt to create docker network force-systemd-flag-279155 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:35:44.806258 1000785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-279155 force-systemd-flag-279155
	I1208 01:35:44.873486 1000785 network_create.go:108] docker network force-systemd-flag-279155 192.168.85.0/24 created
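	The subnet scan above walks the existing minikube bridges (.49, .58, .67, .76) and settles on 192.168.85.0/24. A quick way to confirm what was created, using only docker CLI calls that already appear in this log plus a label filter (sketch, not part of this run):

	    # subnet and gateway of the network minikube just created
	    docker network inspect force-systemd-flag-279155 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	    # expected: 192.168.85.0/24 via 192.168.85.1
	    # the other minikube-created bridges that forced the scan past the first four subnets
	    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true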
	I1208 01:35:44.873519 1000785 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-279155" container
	I1208 01:35:44.873590 1000785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:35:44.889393 1000785 cli_runner.go:164] Run: docker volume create force-systemd-flag-279155 --label name.minikube.sigs.k8s.io=force-systemd-flag-279155 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:35:44.908503 1000785 oci.go:103] Successfully created a docker volume force-systemd-flag-279155
	I1208 01:35:44.908603 1000785 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-279155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-279155 --entrypoint /usr/bin/test -v force-systemd-flag-279155:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:35:45.507834 1000785 oci.go:107] Successfully prepared a docker volume force-systemd-flag-279155
	I1208 01:35:45.507901 1000785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:45.507911 1000785 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:35:45.507975 1000785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-279155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:35:49.551599 1000785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-279155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.043589488s)
	I1208 01:35:49.551637 1000785 kic.go:203] duration metric: took 4.043722068s to extract preloaded images to volume ...
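	The four-second step above is a throwaway container that mounts the lz4 preload tarball read-only next to the new machine volume and untars the container images into it. To spot-check that the volume was populated, something like the following works (a sketch: it assumes the preload unpacks an image store under lib/containers inside the volume and that a small utility image such as alpine can be pulled):

	    # list the cri-o image store the preload should have produced on the volume
	    docker run --rm -v force-systemd-flag-279155:/var alpine ls /var/lib/containers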
	W1208 01:35:49.551785 1000785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:35:49.551897 1000785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:35:49.607134 1000785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-279155 --name force-systemd-flag-279155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-279155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-279155 --network force-systemd-flag-279155 --ip 192.168.85.2 --volume force-systemd-flag-279155:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:35:49.926096 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Running}}
	I1208 01:35:49.947230 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:35:49.966664 1000785 cli_runner.go:164] Run: docker exec force-systemd-flag-279155 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:35:50.036921 1000785 oci.go:144] the created container "force-systemd-flag-279155" has a running status.
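	The node container started above publishes 22, 2376, 5000, 8443 and 32443 to random loopback ports; the SSH provisioning below resolves the one mapped to 22/tcp (33752 in this run). The same lookups by hand (sketch):

	    # container state, as minikube checks it above
	    docker container inspect force-systemd-flag-279155 --format '{{.State.Status}}'
	    # which host port 22/tcp landed on; this is the port every ssh_runner call below uses
	    docker port force-systemd-flag-279155 22/tcp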
	I1208 01:35:50.036953 1000785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa...
	I1208 01:35:50.326415 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1208 01:35:50.326461 1000785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:35:50.345741 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:35:50.369686 1000785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:35:50.369706 1000785 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-279155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:35:50.455412 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:35:50.486575 1000785 machine.go:94] provisionDockerMachine start ...
	I1208 01:35:50.486686 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:50.511840 1000785 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:50.512178 1000785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1208 01:35:50.512187 1000785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:35:50.512797 1000785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:35:53.670388 1000785 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-279155
	
	I1208 01:35:53.670414 1000785 ubuntu.go:182] provisioning hostname "force-systemd-flag-279155"
	I1208 01:35:53.670489 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:53.687522 1000785 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:53.687840 1000785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1208 01:35:53.687858 1000785 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-279155 && echo "force-systemd-flag-279155" | sudo tee /etc/hostname
	I1208 01:35:53.847970 1000785 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-279155
	
	I1208 01:35:53.848051 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:53.866468 1000785 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:53.866792 1000785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1208 01:35:53.866809 1000785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-279155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-279155/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-279155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:35:54.019507 1000785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:35:54.019539 1000785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:35:54.019565 1000785 ubuntu.go:190] setting up certificates
	I1208 01:35:54.019575 1000785 provision.go:84] configureAuth start
	I1208 01:35:54.019651 1000785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-279155
	I1208 01:35:54.036964 1000785 provision.go:143] copyHostCerts
	I1208 01:35:54.037013 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:35:54.037047 1000785 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:35:54.037054 1000785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:35:54.037137 1000785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:35:54.037222 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:35:54.037239 1000785 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:35:54.037244 1000785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:35:54.037271 1000785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:35:54.037308 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:35:54.037327 1000785 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:35:54.037331 1000785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:35:54.037356 1000785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:35:54.037427 1000785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-279155 san=[127.0.0.1 192.168.85.2 force-systemd-flag-279155 localhost minikube]
	I1208 01:35:54.310693 1000785 provision.go:177] copyRemoteCerts
	I1208 01:35:54.310790 1000785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:35:54.310874 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:54.327534 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:35:54.434876 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 01:35:54.434941 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 01:35:54.452962 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 01:35:54.453074 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:35:54.470936 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 01:35:54.471013 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:35:54.488486 1000785 provision.go:87] duration metric: took 468.879148ms to configureAuth
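	configureAuth above generated a server certificate with the names in the san=[...] list and copied it to /etc/docker/server.pem on the node. The SANs can be read back with openssl, which the node image ships (the exact invocation is a sketch, not taken from this run):

	    docker exec force-systemd-flag-279155 sh -c 'openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'
	    # expected to list 127.0.0.1, 192.168.85.2, force-systemd-flag-279155, localhost and minikube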
	I1208 01:35:54.488526 1000785 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:35:54.488707 1000785 config.go:182] Loaded profile config "force-systemd-flag-279155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:54.488817 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:54.517159 1000785 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:54.517473 1000785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1208 01:35:54.517493 1000785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:35:54.818519 1000785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:35:54.818540 1000785 machine.go:97] duration metric: took 4.331945129s to provisionDockerMachine
	I1208 01:35:54.818551 1000785 client.go:176] duration metric: took 10.079081629s to LocalClient.Create
	I1208 01:35:54.818574 1000785 start.go:167] duration metric: took 10.079156001s to libmachine.API.Create "force-systemd-flag-279155"
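	The SSH command a few lines above drops a one-line environment file on the node and restarts cri-o so the --insecure-registry flag for the service CIDR takes effect. Verifying it from the host (a sketch; docker exec against the node container is a convenience here, equivalent to the ssh_runner calls in this log):

	    docker exec force-systemd-flag-279155 cat /etc/sysconfig/crio.minikube
	    # expected, per the SSH output above:
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '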
	I1208 01:35:54.818583 1000785 start.go:293] postStartSetup for "force-systemd-flag-279155" (driver="docker")
	I1208 01:35:54.818593 1000785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:35:54.818671 1000785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:35:54.818710 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:54.835683 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:35:54.938728 1000785 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:35:54.942022 1000785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:35:54.942049 1000785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:35:54.942060 1000785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:35:54.942143 1000785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:35:54.942241 1000785 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:35:54.942253 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /etc/ssl/certs/7918072.pem
	I1208 01:35:54.942359 1000785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:35:54.949527 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:54.966724 1000785 start.go:296] duration metric: took 148.125978ms for postStartSetup
	I1208 01:35:54.967100 1000785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-279155
	I1208 01:35:54.983232 1000785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/config.json ...
	I1208 01:35:54.983523 1000785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:35:54.983576 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:55.001385 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:35:55.108022 1000785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:35:55.112867 1000785 start.go:128] duration metric: took 10.377228214s to createHost
	I1208 01:35:55.112893 1000785 start.go:83] releasing machines lock for "force-systemd-flag-279155", held for 10.377356281s
	I1208 01:35:55.112963 1000785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-279155
	I1208 01:35:55.131538 1000785 ssh_runner.go:195] Run: cat /version.json
	I1208 01:35:55.131587 1000785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:35:55.131596 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:55.131653 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:35:55.149635 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:35:55.152216 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:35:55.250772 1000785 ssh_runner.go:195] Run: systemctl --version
	I1208 01:35:55.361699 1000785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:35:55.397011 1000785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:35:55.401200 1000785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:35:55.401272 1000785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:35:55.429186 1000785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:35:55.429213 1000785 start.go:496] detecting cgroup driver to use...
	I1208 01:35:55.429226 1000785 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1208 01:35:55.429279 1000785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:35:55.447189 1000785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:35:55.459925 1000785 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:35:55.459994 1000785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:35:55.477635 1000785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:35:55.496916 1000785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:35:55.612572 1000785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:35:55.744440 1000785 docker.go:234] disabling docker service ...
	I1208 01:35:55.744553 1000785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:35:55.765829 1000785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:35:55.778948 1000785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:35:55.893075 1000785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:35:56.029557 1000785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:35:56.045406 1000785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:35:56.061444 1000785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:35:56.061565 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.071694 1000785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1208 01:35:56.071831 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.081264 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.090454 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.100032 1000785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:35:56.108068 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.116442 1000785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.130722 1000785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:56.139386 1000785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:35:56.146778 1000785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:35:56.154048 1000785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:56.270316 1000785 ssh_runner.go:195] Run: sudo systemctl restart crio
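	Between them, the sed edits above pin the pause image, switch cri-o to the systemd cgroup manager, move conmon into the pod cgroup and open unprivileged ports from 0; the restart then picks all of that up. The net effect on the drop-in can be checked like this (a sketch; the exact surrounding lines of 02-crio.conf are not shown in this run):

	    docker exec force-systemd-flag-279155 grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	    # roughly expected:
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    #   pause_image = "registry.k8s.io/pause:3.10.1"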
	I1208 01:35:56.453604 1000785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:35:56.453728 1000785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:35:56.457615 1000785 start.go:564] Will wait 60s for crictl version
	I1208 01:35:56.457733 1000785 ssh_runner.go:195] Run: which crictl
	I1208 01:35:56.461257 1000785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:35:56.487030 1000785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:35:56.487136 1000785 ssh_runner.go:195] Run: crio --version
	I1208 01:35:56.515870 1000785 ssh_runner.go:195] Run: crio --version
	I1208 01:35:56.549527 1000785 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:35:56.552284 1000785 cli_runner.go:164] Run: docker network inspect force-systemd-flag-279155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:35:56.567964 1000785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:35:56.571737 1000785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:35:56.581379 1000785 kubeadm.go:884] updating cluster {Name:force-systemd-flag-279155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-279155 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:35:56.581512 1000785 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:56.581570 1000785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:56.615408 1000785 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:56.615437 1000785 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:35:56.615492 1000785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:56.640186 1000785 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:56.640211 1000785 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:35:56.640219 1000785 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:35:56.640311 1000785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-279155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-279155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:35:56.640406 1000785 ssh_runner.go:195] Run: crio config
	I1208 01:35:56.698671 1000785 cni.go:84] Creating CNI manager for ""
	I1208 01:35:56.698693 1000785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:56.698714 1000785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:35:56.698736 1000785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-279155 NodeName:force-systemd-flag-279155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:35:56.698911 1000785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-279155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:35:56.698986 1000785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:35:56.706642 1000785 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:35:56.706731 1000785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:35:56.714352 1000785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1208 01:35:56.727501 1000785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:35:56.740100 1000785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
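	The kubeadm config rendered above now sits on the node as /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked before StartCluster uses it, assuming kubeadm is among the k8s binaries staged under /var/lib/minikube/binaries/v1.34.2 (the "Found k8s binaries" line above only confirms the directory) and that this version offers the `kubeadm config validate` subcommand:

	    docker exec force-systemd-flag-279155 /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new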
	I1208 01:35:56.753013 1000785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:35:56.757302 1000785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:35:56.766876 1000785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:56.894635 1000785 ssh_runner.go:195] Run: sudo systemctl start kubelet
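	At this point the kubelet unit and the 10-kubeadm.conf drop-in generated from the fragment above are in place, but kubelet will typically keep restarting until kubeadm (driven by StartCluster below) writes /etc/kubernetes/kubelet.conf. Its state can be watched with systemd's own tooling (sketch):

	    # render kubelet.service together with the generated drop-in
	    docker exec force-systemd-flag-279155 systemctl cat kubelet
	    # current status and the last few log lines
	    docker exec force-systemd-flag-279155 systemctl status kubelet --no-pager
	    docker exec force-systemd-flag-279155 journalctl -u kubelet -n 20 --no-pager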
	I1208 01:35:56.911329 1000785 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155 for IP: 192.168.85.2
	I1208 01:35:56.911391 1000785 certs.go:195] generating shared ca certs ...
	I1208 01:35:56.911421 1000785 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:56.911588 1000785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:35:56.911677 1000785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:35:56.911712 1000785 certs.go:257] generating profile certs ...
	I1208 01:35:56.911793 1000785 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key
	I1208 01:35:56.911837 1000785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt with IP's: []
	I1208 01:35:57.124323 1000785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt ...
	I1208 01:35:57.124404 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt: {Name:mk7b8628f0e4527b4a868a8ba2004321cb127bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:57.124628 1000785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key ...
	I1208 01:35:57.124675 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key: {Name:mkc39ad0f7fa5a0c7d686e9fd83439e14c787af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:57.124808 1000785 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key.95fee182
	I1208 01:35:57.124845 1000785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt.95fee182 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:35:57.253738 1000785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt.95fee182 ...
	I1208 01:35:57.253777 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt.95fee182: {Name:mk1a3f3ffd33932d2aab234143349dbbd0e4cc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:57.253990 1000785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key.95fee182 ...
	I1208 01:35:57.254008 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key.95fee182: {Name:mka1189d9c608f2ede75fab957542ea281767a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:57.254129 1000785 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt.95fee182 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt
	I1208 01:35:57.254229 1000785 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key.95fee182 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key
	I1208 01:35:57.254305 1000785 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.key
	I1208 01:35:57.254326 1000785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.crt with IP's: []
	I1208 01:35:57.521921 1000785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.crt ...
	I1208 01:35:57.521953 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.crt: {Name:mk8ea2906b48786360b5806200b441d3c4190710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:57.522136 1000785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.key ...
	I1208 01:35:57.522160 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.key: {Name:mkf74473d777d157189c91191bb2e6826ea79ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
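	The client, apiserver and aggregator certs above are generated in-process by minikube's cert code. Purely for illustration, the "minikube-user" client cert corresponds to a CA-signed cert produced with openssl along these lines (key size, subject and validity here are assumptions, not values read from this run; ca.crt/ca.key are the shared CA files referenced just below):

	    openssl genrsa -out client.key 2048
	    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
	    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1095 -out client.crt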
	I1208 01:35:57.522235 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 01:35:57.522259 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 01:35:57.522272 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 01:35:57.522287 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 01:35:57.522303 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 01:35:57.522315 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 01:35:57.522335 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 01:35:57.522347 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 01:35:57.522397 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:35:57.522444 1000785 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:35:57.522455 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:35:57.522482 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:35:57.522514 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:35:57.522542 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:35:57.522598 1000785 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:57.522637 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem -> /usr/share/ca-certificates/791807.pem
	I1208 01:35:57.522655 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> /usr/share/ca-certificates/7918072.pem
	I1208 01:35:57.522667 1000785 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:57.523237 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:35:57.542355 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:35:57.561154 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:35:57.578639 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:35:57.597805 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1208 01:35:57.616014 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:35:57.633060 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:35:57.651312 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:35:57.669788 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:35:57.687210 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:35:57.704517 1000785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:35:57.721805 1000785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:35:57.734441 1000785 ssh_runner.go:195] Run: openssl version
	I1208 01:35:57.740840 1000785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:35:57.748784 1000785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:35:57.756500 1000785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:35:57.761124 1000785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:35:57.761241 1000785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:35:57.808098 1000785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:35:57.815612 1000785 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:35:57.822963 1000785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:35:57.830766 1000785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:35:57.843270 1000785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:35:57.847324 1000785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:35:57.847439 1000785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:35:57.889510 1000785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:35:57.897283 1000785 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:35:57.904940 1000785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:57.912499 1000785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:35:57.920277 1000785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:57.925427 1000785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:57.925544 1000785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:57.966663 1000785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:35:57.974363 1000785 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
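	The test/ln/openssl sequence above implements OpenSSL's CApath convention: every CA under /etc/ssl/certs needs a symlink named <subject-hash>.0 so hash-based lookups can find it. Reproducing the minikubeCA link by hand on the node (sketch; run inside the container, e.g. via docker exec -it force-systemd-flag-279155 bash):

	    # prints the subject hash, b5213941 for this CA per the log above
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"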
	I1208 01:35:57.981884 1000785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:35:57.985624 1000785 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:35:57.985689 1000785 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-279155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-279155 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:57.985788 1000785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:35:57.985864 1000785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:35:58.018267 1000785 cri.go:89] found id: ""
	I1208 01:35:58.018351 1000785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:35:58.026945 1000785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:35:58.035988 1000785 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:35:58.036055 1000785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:35:58.045271 1000785 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:35:58.045336 1000785 kubeadm.go:158] found existing configuration files:
	
	I1208 01:35:58.045406 1000785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:35:58.053249 1000785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:35:58.053318 1000785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:35:58.061261 1000785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:35:58.069343 1000785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:35:58.069412 1000785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:35:58.077373 1000785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:35:58.085461 1000785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:35:58.085586 1000785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:35:58.093400 1000785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:35:58.101918 1000785 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:35:58.101987 1000785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
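
	(Editor's note, not part of the log: the four grep/rm pairs above are the same check repeated per file. A consolidated sketch of that stale-config cleanup, for reference:)

	    # A config file is kept only if it already points at the expected control-plane endpoint;
	    # otherwise it is removed so kubeadm can regenerate it.
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
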
	I1208 01:35:58.109704 1000785 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:35:58.149869 1000785 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 01:35:58.150145 1000785 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:35:58.172391 1000785 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:35:58.172510 1000785 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:35:58.172590 1000785 kubeadm.go:319] OS: Linux
	I1208 01:35:58.172664 1000785 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:35:58.172732 1000785 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:35:58.172801 1000785 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:35:58.172871 1000785 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:35:58.172967 1000785 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:35:58.173038 1000785 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:35:58.173093 1000785 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:35:58.173148 1000785 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:35:58.173200 1000785 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:35:58.237380 1000785 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:35:58.237563 1000785 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:35:58.237678 1000785 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:35:58.244346 1000785 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:35:58.250935 1000785 out.go:252]   - Generating certificates and keys ...
	I1208 01:35:58.251093 1000785 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:35:58.251199 1000785 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:35:58.421209 1000785 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:35:59.026462 1000785 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:35:59.275780 1000785 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:35:59.723618 1000785 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:36:01.502347 1000785 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:36:01.502819 1000785 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-279155 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:36:02.636031 1000785 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:36:02.636458 1000785 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-279155 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:36:03.208061 1000785 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:36:04.242850 1000785 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:36:04.288672 1000785 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:36:04.288922 1000785 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:36:04.419977 1000785 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:36:05.379886 1000785 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:36:05.501090 1000785 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:36:05.954953 1000785 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:36:06.329897 1000785 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:36:06.330818 1000785 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:36:06.334583 1000785 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:36:06.338018 1000785 out.go:252]   - Booting up control plane ...
	I1208 01:36:06.338142 1000785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:36:06.338238 1000785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:36:06.339572 1000785 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:36:06.354990 1000785 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:36:06.355130 1000785 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:36:06.362492 1000785 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:36:06.363099 1000785 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:36:06.363435 1000785 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:36:06.500517 1000785 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:36:06.500643 1000785 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:36:07.498046 1000785 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001153673s
	I1208 01:36:07.501718 1000785 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 01:36:07.501814 1000785 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1208 01:36:07.501903 1000785 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 01:36:07.501981 1000785 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 01:36:12.062798 1000785 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.560974882s
	I1208 01:36:12.665991 1000785 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.163945058s
	I1208 01:36:13.503426 1000785 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001564013s
	I1208 01:36:13.535631 1000785 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 01:36:13.554431 1000785 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 01:36:13.570873 1000785 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 01:36:13.571087 1000785 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-279155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 01:36:13.583544 1000785 kubeadm.go:319] [bootstrap-token] Using token: zs8x94.3ckmqjegcd1ye8m5
	I1208 01:36:13.586632 1000785 out.go:252]   - Configuring RBAC rules ...
	I1208 01:36:13.586794 1000785 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 01:36:13.594432 1000785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 01:36:13.604116 1000785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 01:36:13.609045 1000785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 01:36:13.613591 1000785 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 01:36:13.620808 1000785 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 01:36:13.910899 1000785 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 01:36:14.462510 1000785 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 01:36:14.910079 1000785 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 01:36:14.911645 1000785 kubeadm.go:319] 
	I1208 01:36:14.911732 1000785 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 01:36:14.911742 1000785 kubeadm.go:319] 
	I1208 01:36:14.911824 1000785 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 01:36:14.911831 1000785 kubeadm.go:319] 
	I1208 01:36:14.911864 1000785 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 01:36:14.911929 1000785 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 01:36:14.911983 1000785 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 01:36:14.911990 1000785 kubeadm.go:319] 
	I1208 01:36:14.912071 1000785 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 01:36:14.912087 1000785 kubeadm.go:319] 
	I1208 01:36:14.912140 1000785 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 01:36:14.912150 1000785 kubeadm.go:319] 
	I1208 01:36:14.912202 1000785 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 01:36:14.912280 1000785 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 01:36:14.912352 1000785 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 01:36:14.912359 1000785 kubeadm.go:319] 
	I1208 01:36:14.912443 1000785 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 01:36:14.912525 1000785 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 01:36:14.912531 1000785 kubeadm.go:319] 
	I1208 01:36:14.912615 1000785 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zs8x94.3ckmqjegcd1ye8m5 \
	I1208 01:36:14.912720 1000785 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 01:36:14.912744 1000785 kubeadm.go:319] 	--control-plane 
	I1208 01:36:14.912751 1000785 kubeadm.go:319] 
	I1208 01:36:14.912837 1000785 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 01:36:14.912844 1000785 kubeadm.go:319] 
	I1208 01:36:14.912927 1000785 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zs8x94.3ckmqjegcd1ye8m5 \
	I1208 01:36:14.913033 1000785 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 01:36:14.917311 1000785 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 01:36:14.917541 1000785 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:36:14.917649 1000785 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
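
	(Editor's note, not part of the log: the --discovery-token-ca-cert-hash printed in the join command above can be recomputed from the cluster CA. Sketch below; the CA path is an assumption based on the certificateDir "/var/lib/minikube/certs" used earlier in this run.)

	    # Standard kubeadm recipe: SHA-256 of the DER-encoded CA public key.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
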
	I1208 01:36:14.917669 1000785 cni.go:84] Creating CNI manager for ""
	I1208 01:36:14.917677 1000785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:36:14.920786 1000785 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 01:36:14.923630 1000785 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 01:36:14.927761 1000785 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 01:36:14.927779 1000785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 01:36:14.941752 1000785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 01:36:15.246709 1000785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 01:36:15.246873 1000785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:36:15.246963 1000785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-279155 minikube.k8s.io/updated_at=2025_12_08T01_36_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=force-systemd-flag-279155 minikube.k8s.io/primary=true
	I1208 01:36:15.376176 1000785 ops.go:34] apiserver oom_adj: -16
	I1208 01:36:15.381459 1000785 kubeadm.go:1114] duration metric: took 134.659393ms to wait for elevateKubeSystemPrivileges
	I1208 01:36:15.381487 1000785 kubeadm.go:403] duration metric: took 17.395802562s to StartCluster
	I1208 01:36:15.381504 1000785 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:36:15.381567 1000785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:36:15.382497 1000785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:36:15.382700 1000785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:36:15.382828 1000785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 01:36:15.383073 1000785 config.go:182] Loaded profile config "force-systemd-flag-279155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:36:15.383111 1000785 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:36:15.383212 1000785 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-flag-279155"
	I1208 01:36:15.383233 1000785 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-flag-279155"
	I1208 01:36:15.383256 1000785 host.go:66] Checking if "force-systemd-flag-279155" exists ...
	I1208 01:36:15.383756 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:36:15.383950 1000785 addons.go:70] Setting default-storageclass=true in profile "force-systemd-flag-279155"
	I1208 01:36:15.383972 1000785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-279155"
	I1208 01:36:15.384216 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:36:15.386169 1000785 out.go:179] * Verifying Kubernetes components...
	I1208 01:36:15.391917 1000785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:36:15.426573 1000785 kapi.go:59] client config for force-systemd-flag-279155: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:36:15.427187 1000785 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 01:36:15.427200 1000785 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 01:36:15.427206 1000785 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 01:36:15.427210 1000785 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 01:36:15.427214 1000785 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 01:36:15.427603 1000785 addons.go:239] Setting addon default-storageclass=true in "force-systemd-flag-279155"
	I1208 01:36:15.427636 1000785 host.go:66] Checking if "force-systemd-flag-279155" exists ...
	I1208 01:36:15.428062 1000785 cli_runner.go:164] Run: docker container inspect force-systemd-flag-279155 --format={{.State.Status}}
	I1208 01:36:15.428242 1000785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:36:15.428563 1000785 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 01:36:15.431305 1000785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:36:15.431332 1000785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:36:15.431395 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:36:15.463290 1000785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:36:15.463312 1000785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:36:15.463376 1000785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-279155
	I1208 01:36:15.482941 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:36:15.499568 1000785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/force-systemd-flag-279155/id_rsa Username:docker}
	I1208 01:36:15.703530 1000785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 01:36:15.703703 1000785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:36:15.754882 1000785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:36:15.847470 1000785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:36:16.051649 1000785 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
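
	(Editor's note, not part of the log: the sed pipeline at 01:36:15.703530 injects a hosts stanza into the CoreDNS Corefile. A sketch of how one could verify the injected record; the expected stanza follows directly from that sed expression.)

	    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	    # Expected stanza:
	    #   hosts {
	    #      192.168.85.1 host.minikube.internal
	    #      fallthrough
	    #   }
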
	I1208 01:36:16.052280 1000785 kapi.go:59] client config for force-systemd-flag-279155: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:36:16.053330 1000785 kapi.go:59] client config for force-systemd-flag-279155: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/force-systemd-flag-279155/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:36:16.053582 1000785 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:36:16.053631 1000785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:36:16.394675 1000785 api_server.go:72] duration metric: took 1.011947317s to wait for apiserver process to appear ...
	I1208 01:36:16.394697 1000785 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:36:16.394715 1000785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:36:16.399449 1000785 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1208 01:36:16.402447 1000785 addons.go:530] duration metric: took 1.019323843s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1208 01:36:16.405549 1000785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:36:16.406667 1000785 api_server.go:141] control plane version: v1.34.2
	I1208 01:36:16.406693 1000785 api_server.go:131] duration metric: took 11.988908ms to wait for apiserver health ...
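
	(Editor's note, not part of the log: the healthz wait above probes the apiserver endpoint directly. A hand-run equivalent might look like the sketch below; it assumes the endpoint answers unauthenticated /healthz requests, as it did for minikube's own check, and uses the CA pem copied to the node earlier.)

	    curl --cacert /usr/share/ca-certificates/minikubeCA.pem https://192.168.85.2:8443/healthz
	    # -> ok
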
	I1208 01:36:16.406703 1000785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:36:16.411172 1000785 system_pods.go:59] 5 kube-system pods found
	I1208 01:36:16.411211 1000785 system_pods.go:61] "etcd-force-systemd-flag-279155" [a6b79464-2fe0-4a74-ab1e-d621c1502696] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:36:16.411221 1000785 system_pods.go:61] "kube-apiserver-force-systemd-flag-279155" [ff93d167-398e-44a8-ab49-57a1bcc365f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:36:16.411230 1000785 system_pods.go:61] "kube-controller-manager-force-systemd-flag-279155" [06827963-7bfd-4bce-9e0b-98e2ab0e0538] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:36:16.411239 1000785 system_pods.go:61] "kube-scheduler-force-systemd-flag-279155" [aea80be4-9408-46c2-a77e-66e960758ed4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:36:16.411249 1000785 system_pods.go:61] "storage-provisioner" [267842af-a3cd-45a8-b7f9-10cddbb708a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1208 01:36:16.411261 1000785 system_pods.go:74] duration metric: took 4.55003ms to wait for pod list to return data ...
	I1208 01:36:16.411277 1000785 kubeadm.go:587] duration metric: took 1.028554587s to wait for: map[apiserver:true system_pods:true]
	I1208 01:36:16.411300 1000785 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:36:16.413804 1000785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:36:16.413838 1000785 node_conditions.go:123] node cpu capacity is 2
	I1208 01:36:16.413860 1000785 node_conditions.go:105] duration metric: took 2.553574ms to run NodePressure ...
	I1208 01:36:16.413874 1000785 start.go:242] waiting for startup goroutines ...
	I1208 01:36:16.555540 1000785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-279155" context rescaled to 1 replicas
	I1208 01:36:16.555582 1000785 start.go:247] waiting for cluster config update ...
	I1208 01:36:16.555595 1000785 start.go:256] writing updated cluster config ...
	I1208 01:36:16.555890 1000785 ssh_runner.go:195] Run: rm -f paused
	I1208 01:36:16.611658 1000785 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:36:16.615005 1000785 out.go:179] * Done! kubectl is now configured to use "force-systemd-flag-279155" cluster and "default" namespace by default
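
	(Editor's note, not part of the log: per the "Done!" line above, the kubeconfig now points at this cluster, with the profile name as the kubectl context. A first sanity check could be:)

	    kubectl --context force-systemd-flag-279155 get nodes
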
	I1208 01:36:20.367832  965470 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001095408s
	I1208 01:36:20.367866  965470 kubeadm.go:319] 
	I1208 01:36:20.368108  965470 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:36:20.368176  965470 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:36:20.368367  965470 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:36:20.368373  965470 kubeadm.go:319] 
	I1208 01:36:20.368804  965470 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:36:20.368865  965470 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:36:20.368920  965470 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:36:20.368925  965470 kubeadm.go:319] 
	I1208 01:36:20.371918  965470 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:36:20.373107  965470 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:36:20.373314  965470 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:36:20.374296  965470 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 01:36:20.374317  965470 kubeadm.go:319] 
	I1208 01:36:20.374441  965470 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:36:20.374509  965470 kubeadm.go:403] duration metric: took 12m9.334957931s to StartCluster
	I1208 01:36:20.374552  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:36:20.374621  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:36:20.416560  965470 cri.go:89] found id: ""
	I1208 01:36:20.416585  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.416594  965470 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:36:20.416641  965470 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:36:20.416714  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:36:20.447485  965470 cri.go:89] found id: ""
	I1208 01:36:20.447508  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.447516  965470 logs.go:284] No container was found matching "etcd"
	I1208 01:36:20.447522  965470 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:36:20.447583  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:36:20.478226  965470 cri.go:89] found id: ""
	I1208 01:36:20.478249  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.478257  965470 logs.go:284] No container was found matching "coredns"
	I1208 01:36:20.478263  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:36:20.478319  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:36:20.513685  965470 cri.go:89] found id: ""
	I1208 01:36:20.513711  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.513720  965470 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:36:20.513732  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:36:20.513793  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:36:20.545157  965470 cri.go:89] found id: ""
	I1208 01:36:20.545180  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.545189  965470 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:36:20.545194  965470 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:36:20.545255  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:36:20.572978  965470 cri.go:89] found id: ""
	I1208 01:36:20.573001  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.573010  965470 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:36:20.573017  965470 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:36:20.573077  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:36:20.613135  965470 cri.go:89] found id: ""
	I1208 01:36:20.613161  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.613169  965470 logs.go:284] No container was found matching "kindnet"
	I1208 01:36:20.613176  965470 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:36:20.613238  965470 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:36:20.647679  965470 cri.go:89] found id: ""
	I1208 01:36:20.647712  965470 logs.go:282] 0 containers: []
	W1208 01:36:20.647722  965470 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:36:20.647732  965470 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:36:20.647744  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:36:20.740283  965470 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:36:20.740303  965470 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:36:20.740316  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:36:20.781177  965470 logs.go:123] Gathering logs for container status ...
	I1208 01:36:20.781258  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:36:20.869245  965470 logs.go:123] Gathering logs for kubelet ...
	I1208 01:36:20.869273  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:36:20.950362  965470 logs.go:123] Gathering logs for dmesg ...
	I1208 01:36:20.950401  965470 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:36:20.971106  965470 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:36:20.971168  965470 out.go:285] * 
	W1208 01:36:20.971285  965470 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:36:20.971324  965470 out.go:285] * 
	W1208 01:36:20.974214  965470 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:36:20.980009  965470 out.go:203] 
	W1208 01:36:20.982935  965470 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:36:20.983157  965470 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:36:20.983187  965470 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:36:20.988230  965470 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648076726Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648286591Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648406863Z" level=info msg="Create NRI interface"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.64856844Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648650877Z" level=info msg="runtime interface created"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648711292Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648777443Z" level=info msg="runtime interface starting up..."
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648842567Z" level=info msg="starting plugins..."
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.648933399Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:24:03 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:24:03.649090791Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:24:03 kubernetes-upgrade-386622 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.811384061Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a842bdd-71fc-427c-b702-ae74e5c39129 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.812106414Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a2f5c41f-5e37-4a9c-be03-11909ac712fe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.812623736Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=346b60e9-23f2-4936-bd8b-a79769d0210b name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.813192274Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1aa13325-57e8-423a-8d5d-fe2fd3eb1d71 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.813731488Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7fe31c45-8ec5-48bc-9eed-b61e6d521851 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.814221256Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4dd0ee56-0ff4-4bdd-8cc6-f72dc8537969 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:28:16 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:28:16.814681667Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c2a2d25f-7f1d-4d7e-add9-3ec5140a78ca name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.268486235Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b9ef649c-9dc9-409e-b981-401138a389de name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.26936772Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f0180204-3125-4a9d-9314-384ba9909450 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.269942068Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c9fc07dc-dc05-4725-8788-84c8ee257c84 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.270526402Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f5347090-1259-44da-82e9-a567b24940be name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.271257149Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1652283d-2809-4da2-aeea-d0846d3acd10 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.271684173Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ef1a684c-5c15-41a3-94eb-8522e8faea54 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:32:19 kubernetes-upgrade-386622 crio[617]: time="2025-12-08T01:32:19.272122487Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3922c9b3-40f5-4ba2-a0a1-c775607683af name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 01:00] overlayfs: idmapped layers are currently not supported
	[  +3.041176] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:02] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:03] overlayfs: idmapped layers are currently not supported
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:36:23 up  6:18,  0 user,  load average: 3.10, 1.90, 1.80
	Linux kubernetes-upgrade-386622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:36:20 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:36:21 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 08 01:36:21 kubernetes-upgrade-386622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:21 kubernetes-upgrade-386622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:21 kubernetes-upgrade-386622 kubelet[12335]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:21 kubernetes-upgrade-386622 kubelet[12335]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:21 kubernetes-upgrade-386622 kubelet[12335]: E1208 01:36:21.585939   12335 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:36:21 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:36:21 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:36:22 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 08 01:36:22 kubernetes-upgrade-386622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:22 kubernetes-upgrade-386622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:22 kubernetes-upgrade-386622 kubelet[12355]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:22 kubernetes-upgrade-386622 kubelet[12355]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:22 kubernetes-upgrade-386622 kubelet[12355]: E1208 01:36:22.333586   12355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:36:22 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:36:22 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:36:23 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 08 01:36:23 kubernetes-upgrade-386622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:23 kubernetes-upgrade-386622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:36:23 kubernetes-upgrade-386622 kubelet[12448]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:23 kubernetes-upgrade-386622 kubelet[12448]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:36:23 kubernetes-upgrade-386622 kubelet[12448]: E1208 01:36:23.107102   12448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:36:23 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:36:23 kubernetes-upgrade-386622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-386622 -n kubernetes-upgrade-386622
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-386622 -n kubernetes-upgrade-386622: exit status 2 (420.781736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-386622" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-386622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-386622
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-386622: (2.45495369s)
--- FAIL: TestKubernetesUpgrade (791.33s)
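The kubelet journal above points at the actual blocker for this upgrade test: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, and kubeadm's SystemVerification warning plus minikube's own suggestion name the two knobs involved. The commands below are only an illustrative sketch of those two suggestions, not a verified fix for this job; the start flags are copied from the suggestion line and the Audit log later in this report, while the KubeletConfiguration field is quoted from the warning and its availability on this beta is assumed.

	# Re-run the second (upgrade) start with the cgroup driver override suggested in the log
	out/minikube-linux-arm64 start -p kubernetes-upgrade-386622 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Alternatively, per the SystemVerification warning, re-enable cgroup v1 support in the
	# kubelet configuration (field name as quoted by kubeadm):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false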

                                                
                                    
TestPause/serial/Pause (6.16s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-814452 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-814452 --alsologtostderr -v=5: exit status 80 (1.658096964s)

                                                
                                                
-- stdout --
	* Pausing node pause-814452 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:35:35.896542  999333 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:35:35.896701  999333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:35.896707  999333 out.go:374] Setting ErrFile to fd 2...
	I1208 01:35:35.896712  999333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:35.896972  999333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:35:35.897232  999333 out.go:368] Setting JSON to false
	I1208 01:35:35.897250  999333 mustload.go:66] Loading cluster: pause-814452
	I1208 01:35:35.897659  999333 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:35.898174  999333 cli_runner.go:164] Run: docker container inspect pause-814452 --format={{.State.Status}}
	I1208 01:35:35.922648  999333 host.go:66] Checking if "pause-814452" exists ...
	I1208 01:35:35.923014  999333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:36.032856  999333 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:35:36.021282203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:36.033503  999333 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-814452 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1208 01:35:36.037011  999333 out.go:179] * Pausing node pause-814452 ... 
	I1208 01:35:36.040782  999333 host.go:66] Checking if "pause-814452" exists ...
	I1208 01:35:36.041159  999333 ssh_runner.go:195] Run: systemctl --version
	I1208 01:35:36.041212  999333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:36.067046  999333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:36.177431  999333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:35:36.190103  999333 pause.go:52] kubelet running: true
	I1208 01:35:36.190172  999333 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:35:36.414259  999333 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:35:36.414337  999333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:35:36.479082  999333 cri.go:89] found id: "516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9"
	I1208 01:35:36.479106  999333 cri.go:89] found id: "a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7"
	I1208 01:35:36.479112  999333 cri.go:89] found id: "5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70"
	I1208 01:35:36.479116  999333 cri.go:89] found id: "0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652"
	I1208 01:35:36.479119  999333 cri.go:89] found id: "f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff"
	I1208 01:35:36.479123  999333 cri.go:89] found id: "72fae15be4bbe10f53114796f4ea74adf286d63828cec603838a2da804289607"
	I1208 01:35:36.479126  999333 cri.go:89] found id: "77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8"
	I1208 01:35:36.479128  999333 cri.go:89] found id: "0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888"
	I1208 01:35:36.479132  999333 cri.go:89] found id: "c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc"
	I1208 01:35:36.479137  999333 cri.go:89] found id: "4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125"
	I1208 01:35:36.479140  999333 cri.go:89] found id: "a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a"
	I1208 01:35:36.479143  999333 cri.go:89] found id: "1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16"
	I1208 01:35:36.479146  999333 cri.go:89] found id: "fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c"
	I1208 01:35:36.479150  999333 cri.go:89] found id: "8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14"
	I1208 01:35:36.479153  999333 cri.go:89] found id: ""
	I1208 01:35:36.479201  999333 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:35:36.490012  999333 retry.go:31] will retry after 238.846582ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:36Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:35:36.729528  999333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:35:36.742531  999333 pause.go:52] kubelet running: false
	I1208 01:35:36.742641  999333 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:35:36.889688  999333 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:35:36.889814  999333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:35:36.965159  999333 cri.go:89] found id: "516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9"
	I1208 01:35:36.965190  999333 cri.go:89] found id: "a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7"
	I1208 01:35:36.965196  999333 cri.go:89] found id: "5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70"
	I1208 01:35:36.965200  999333 cri.go:89] found id: "0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652"
	I1208 01:35:36.965203  999333 cri.go:89] found id: "f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff"
	I1208 01:35:36.965206  999333 cri.go:89] found id: "72fae15be4bbe10f53114796f4ea74adf286d63828cec603838a2da804289607"
	I1208 01:35:36.965210  999333 cri.go:89] found id: "77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8"
	I1208 01:35:36.965227  999333 cri.go:89] found id: "0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888"
	I1208 01:35:36.965233  999333 cri.go:89] found id: "c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc"
	I1208 01:35:36.965240  999333 cri.go:89] found id: "4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125"
	I1208 01:35:36.965247  999333 cri.go:89] found id: "a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a"
	I1208 01:35:36.965249  999333 cri.go:89] found id: "1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16"
	I1208 01:35:36.965252  999333 cri.go:89] found id: "fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c"
	I1208 01:35:36.965267  999333 cri.go:89] found id: "8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14"
	I1208 01:35:36.965270  999333 cri.go:89] found id: ""
	I1208 01:35:36.965338  999333 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:35:36.976461  999333 retry.go:31] will retry after 220.601774ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:36Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:35:37.197978  999333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:35:37.211198  999333 pause.go:52] kubelet running: false
	I1208 01:35:37.211307  999333 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:35:37.376243  999333 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:35:37.376373  999333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:35:37.444592  999333 cri.go:89] found id: "516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9"
	I1208 01:35:37.444618  999333 cri.go:89] found id: "a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7"
	I1208 01:35:37.444623  999333 cri.go:89] found id: "5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70"
	I1208 01:35:37.444627  999333 cri.go:89] found id: "0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652"
	I1208 01:35:37.444631  999333 cri.go:89] found id: "f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff"
	I1208 01:35:37.444634  999333 cri.go:89] found id: "72fae15be4bbe10f53114796f4ea74adf286d63828cec603838a2da804289607"
	I1208 01:35:37.444637  999333 cri.go:89] found id: "77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8"
	I1208 01:35:37.444640  999333 cri.go:89] found id: "0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888"
	I1208 01:35:37.444643  999333 cri.go:89] found id: "c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc"
	I1208 01:35:37.444649  999333 cri.go:89] found id: "4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125"
	I1208 01:35:37.444652  999333 cri.go:89] found id: "a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a"
	I1208 01:35:37.444655  999333 cri.go:89] found id: "1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16"
	I1208 01:35:37.444659  999333 cri.go:89] found id: "fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c"
	I1208 01:35:37.444664  999333 cri.go:89] found id: "8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14"
	I1208 01:35:37.444667  999333 cri.go:89] found id: ""
	I1208 01:35:37.444718  999333 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:35:37.459487  999333 out.go:203] 
	W1208 01:35:37.462467  999333 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 01:35:37.462491  999333 out.go:285] * 
	* 
	W1208 01:35:37.469687  999333 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:35:37.472875  999333 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-814452 --alsologtostderr -v=5" : exit status 80
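This pause failure is distinct from the kubelet/cgroup issue above: CRI-O lists the expected kube-system containers, but each `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", so minikube exits with GUEST_PAUSE. A minimal manual cross-check is sketched below; it assumes shell access to the node via `minikube ssh` and that runc state under /run/runc is what CRI-O actually uses on this kicbase image, which the log does not confirm (CRI-O could be driving a different OCI runtime or state root).

	# Shell into the node for this profile
	out/minikube-linux-arm64 ssh -p pause-814452

	# Containers as CRI-O sees them (the container IDs above came from crictl)
	sudo crictl ps

	# State directory the failing command reads; the error says it is missing
	ls -ld /run/runc

	# The exact invocation minikube retried before giving up
	sudo runc list -f json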
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-814452
helpers_test.go:243: (dbg) docker inspect pause-814452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688",
	        "Created": "2025-12-08T01:34:05.276754584Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:34:05.347890283Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/hostname",
	        "HostsPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/hosts",
	        "LogPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688-json.log",
	        "Name": "/pause-814452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-814452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-814452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688",
	                "LowerDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-814452",
	                "Source": "/var/lib/docker/volumes/pause-814452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-814452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-814452",
	                "name.minikube.sigs.k8s.io": "pause-814452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d776182e7a89fc3a1a42b3db5b334e4ddebf426833000a2bab88e7497d5c8ad6",
	            "SandboxKey": "/var/run/docker/netns/d776182e7a89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33751"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-814452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:57:ee:ff:94:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2ba844c4593cb0ab79941bf15dcda381823827aca81dbb7d6fcf6500ea56fbd",
	                    "EndpointID": "343fc667fb1f7a64e531d09ab3425f1407b8a0466d24ca539f00436a108757e7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-814452",
	                        "8110bb7c02fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-814452 -n pause-814452
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-814452 -n pause-814452: exit status 2 (350.564819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-814452 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-814452 logs -n 25: (1.42459469s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-526754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:21 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p missing-upgrade-156445 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-156445    │ jenkins │ v1.35.0 │ 08 Dec 25 01:21 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p missing-upgrade-156445 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-156445    │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:23 UTC │
	│ delete  │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-526754 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ stop    │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-526754 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-526754 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ delete  │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ delete  │ -p missing-upgrade-156445                                                                                                                       │ missing-upgrade-156445    │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p stopped-upgrade-971260 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-971260    │ jenkins │ v1.35.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:24 UTC │
	│ stop    │ -p kubernetes-upgrade-386622                                                                                                                    │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ stop    │ stopped-upgrade-971260 stop                                                                                                                     │ stopped-upgrade-971260    │ jenkins │ v1.35.0 │ 08 Dec 25 01:24 UTC │ 08 Dec 25 01:24 UTC │
	│ start   │ -p stopped-upgrade-971260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-971260    │ jenkins │ v1.37.0 │ 08 Dec 25 01:24 UTC │ 08 Dec 25 01:28 UTC │
	│ delete  │ -p stopped-upgrade-971260                                                                                                                       │ stopped-upgrade-971260    │ jenkins │ v1.37.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-457612    │ jenkins │ v1.35.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:29 UTC │ 08 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-457612                                                                                                                       │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	│ start   │ -p pause-814452 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:35 UTC │
	│ start   │ -p pause-814452 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │ 08 Dec 25 01:35 UTC │
	│ pause   │ -p pause-814452 --alsologtostderr -v=5                                                                                                          │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:35:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:35:18.348791  998155 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:35:18.348929  998155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:18.348939  998155 out.go:374] Setting ErrFile to fd 2...
	I1208 01:35:18.348944  998155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:18.349264  998155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:35:18.349686  998155 out.go:368] Setting JSON to false
	I1208 01:35:18.350882  998155 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22651,"bootTime":1765135068,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:35:18.350964  998155 start.go:143] virtualization:  
	I1208 01:35:18.355974  998155 out.go:179] * [pause-814452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:35:18.359059  998155 notify.go:221] Checking for updates...
	I1208 01:35:18.359016  998155 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:35:18.362720  998155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:35:18.365943  998155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:35:18.368942  998155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:35:18.371897  998155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:35:18.374974  998155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:35:18.378366  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:18.379013  998155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:35:18.410827  998155 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:35:18.411009  998155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:18.470951  998155 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:35:18.460870733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:18.471060  998155 docker.go:319] overlay module found
	I1208 01:35:18.474206  998155 out.go:179] * Using the docker driver based on existing profile
	I1208 01:35:18.477061  998155 start.go:309] selected driver: docker
	I1208 01:35:18.477086  998155 start.go:927] validating driver "docker" against &{Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:18.477221  998155 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:35:18.477335  998155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:18.558409  998155 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:35:18.549358435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:18.558807  998155 cni.go:84] Creating CNI manager for ""
	I1208 01:35:18.558901  998155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:18.558948  998155 start.go:353] cluster config:
	{Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:18.564210  998155 out.go:179] * Starting "pause-814452" primary control-plane node in "pause-814452" cluster
	I1208 01:35:18.567113  998155 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:35:18.570124  998155 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:35:18.572996  998155 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:18.573047  998155 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:35:18.573058  998155 cache.go:65] Caching tarball of preloaded images
	I1208 01:35:18.573154  998155 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:35:18.573163  998155 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:35:18.573301  998155 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/config.json ...
	I1208 01:35:18.573533  998155 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:35:18.596962  998155 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:35:18.596987  998155 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:35:18.597002  998155 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:35:18.597036  998155 start.go:360] acquireMachinesLock for pause-814452: {Name:mk5f799fc081980318a78c850936f14295afd380 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:35:18.597107  998155 start.go:364] duration metric: took 36.39µs to acquireMachinesLock for "pause-814452"
	I1208 01:35:18.597132  998155 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:35:18.597137  998155 fix.go:54] fixHost starting: 
	I1208 01:35:18.597401  998155 cli_runner.go:164] Run: docker container inspect pause-814452 --format={{.State.Status}}
	I1208 01:35:18.615199  998155 fix.go:112] recreateIfNeeded on pause-814452: state=Running err=<nil>
	W1208 01:35:18.615233  998155 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:35:18.618581  998155 out.go:252] * Updating the running docker "pause-814452" container ...
	I1208 01:35:18.618619  998155 machine.go:94] provisionDockerMachine start ...
	I1208 01:35:18.618718  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.635380  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.635743  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.635767  998155 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:35:18.786324  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-814452
	
	I1208 01:35:18.786348  998155 ubuntu.go:182] provisioning hostname "pause-814452"
	I1208 01:35:18.786412  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.803388  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.803704  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.803720  998155 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-814452 && echo "pause-814452" | sudo tee /etc/hostname
	I1208 01:35:18.969114  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-814452
	
	I1208 01:35:18.969200  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.986812  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.987216  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.987235  998155 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-814452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-814452/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-814452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:35:19.143371  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:35:19.143398  998155 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:35:19.143431  998155 ubuntu.go:190] setting up certificates
	I1208 01:35:19.143440  998155 provision.go:84] configureAuth start
	I1208 01:35:19.143519  998155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-814452
	I1208 01:35:19.161785  998155 provision.go:143] copyHostCerts
	I1208 01:35:19.161865  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:35:19.161885  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:35:19.161962  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:35:19.162082  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:35:19.162094  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:35:19.162121  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:35:19.162176  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:35:19.162184  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:35:19.162208  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:35:19.162258  998155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.pause-814452 san=[127.0.0.1 192.168.85.2 localhost minikube pause-814452]
	I1208 01:35:19.318310  998155 provision.go:177] copyRemoteCerts
	I1208 01:35:19.318378  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:35:19.318437  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:19.337117  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:19.443091  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:35:19.461606  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:35:19.480147  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 01:35:19.498606  998155 provision.go:87] duration metric: took 355.14113ms to configureAuth
	I1208 01:35:19.498634  998155 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:35:19.498945  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:19.499062  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:19.518023  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:19.518339  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:19.518359  998155 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:35:24.904905  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:35:24.904935  998155 machine.go:97] duration metric: took 6.286307295s to provisionDockerMachine
	I1208 01:35:24.904948  998155 start.go:293] postStartSetup for "pause-814452" (driver="docker")
	I1208 01:35:24.904958  998155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:35:24.905026  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:35:24.905077  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:24.922146  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.028087  998155 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:35:25.032266  998155 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:35:25.032309  998155 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:35:25.032344  998155 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:35:25.032426  998155 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:35:25.032559  998155 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:35:25.032679  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:35:25.041085  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:25.061508  998155 start.go:296] duration metric: took 156.544789ms for postStartSetup
	I1208 01:35:25.061615  998155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:35:25.061672  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.079574  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.184310  998155 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:35:25.189959  998155 fix.go:56] duration metric: took 6.592814373s for fixHost
	I1208 01:35:25.189984  998155 start.go:83] releasing machines lock for "pause-814452", held for 6.592862874s
	I1208 01:35:25.190066  998155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-814452
	I1208 01:35:25.206606  998155 ssh_runner.go:195] Run: cat /version.json
	I1208 01:35:25.206623  998155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:35:25.206696  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.206707  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.225856  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.227358  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.338687  998155 ssh_runner.go:195] Run: systemctl --version
	I1208 01:35:25.437066  998155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:35:25.479134  998155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:35:25.483470  998155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:35:25.483538  998155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:35:25.491881  998155 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:35:25.491903  998155 start.go:496] detecting cgroup driver to use...
	I1208 01:35:25.491933  998155 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:35:25.491999  998155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:35:25.507559  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:35:25.520832  998155 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:35:25.520892  998155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:35:25.536844  998155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:35:25.550306  998155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:35:25.690383  998155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:35:25.836104  998155 docker.go:234] disabling docker service ...
	I1208 01:35:25.836206  998155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:35:25.851711  998155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:35:25.865421  998155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:35:26.018346  998155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:35:26.167876  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:35:26.181441  998155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:35:26.196549  998155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:35:26.196616  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.205153  998155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:35:26.205254  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.213865  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.222971  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.231880  998155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:35:26.240093  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.248810  998155 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.257049  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.265977  998155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:35:26.273803  998155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:35:26.281300  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:26.414963  998155 ssh_runner.go:195] Run: sudo systemctl restart crio
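(The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf -- pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl -- and then restarts CRI-O. Below is a minimal Go sketch of the same line substitutions, purely illustrative: it is not minikube's crio.go, and the input fragment is hypothetical.)

// crio_conf_sketch.go
// Illustrative only: re-implements the sed -i expressions from the log above
// as Go regexp replacements over a hypothetical 02-crio.conf fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf) // prints the rewritten fragment, as the sed commands leave it on the node
}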
	I1208 01:35:26.637639  998155 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:35:26.637758  998155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:35:26.641684  998155 start.go:564] Will wait 60s for crictl version
	I1208 01:35:26.641749  998155 ssh_runner.go:195] Run: which crictl
	I1208 01:35:26.645210  998155 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:35:26.668790  998155 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:35:26.668908  998155 ssh_runner.go:195] Run: crio --version
	I1208 01:35:26.696960  998155 ssh_runner.go:195] Run: crio --version
	I1208 01:35:26.731601  998155 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:35:26.734583  998155 cli_runner.go:164] Run: docker network inspect pause-814452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:35:26.749968  998155 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:35:26.755825  998155 kubeadm.go:884] updating cluster {Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:35:26.755979  998155 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:26.756031  998155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:26.801001  998155 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:26.801021  998155 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:35:26.801076  998155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:26.833095  998155 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:26.833120  998155 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:35:26.833128  998155 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:35:26.833312  998155 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-814452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:35:26.833396  998155 ssh_runner.go:195] Run: crio config
	I1208 01:35:26.904923  998155 cni.go:84] Creating CNI manager for ""
	I1208 01:35:26.904947  998155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:26.904971  998155 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:35:26.904994  998155 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-814452 NodeName:pause-814452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:35:26.905124  998155 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-814452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:35:26.905200  998155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:35:26.912999  998155 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:35:26.913067  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:35:26.920355  998155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1208 01:35:26.933471  998155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:35:26.946214  998155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
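(The kubeadm config printed above -- InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration -- is copied to /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. Below is a minimal Go sketch of rendering a ClusterConfiguration fragment like it from a text/template; this is not minikube's actual template code, and the parameter struct is hypothetical, filled with the values from the cluster config logged earlier.)

// kubeadm_config_sketch.go
// Illustrative sketch of generating a ClusterConfiguration fragment with
// text/template, using values seen in the log (clusterName mk, v1.34.2, etc.).
package main

import (
	"os"
	"text/template"
)

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		ClusterName, ControlPlaneEndpoint, KubernetesVersion string
		PodSubnet, ServiceCIDR                               string
		APIServerPort                                        int
	}{
		ClusterName:          "mk",
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		KubernetesVersion:    "v1.34.2",
		PodSubnet:            "10.244.0.0/16",
		ServiceCIDR:          "10.96.0.0/12",
		APIServerPort:        8443,
	}
	tmpl := template.Must(template.New("cc").Parse(clusterConfigTmpl))
	// Render to stdout; the real flow writes the full multi-document config
	// to /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}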
	I1208 01:35:26.959885  998155 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:35:26.964014  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:27.095276  998155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:35:27.108312  998155 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452 for IP: 192.168.85.2
	I1208 01:35:27.108335  998155 certs.go:195] generating shared ca certs ...
	I1208 01:35:27.108351  998155 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.108484  998155 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:35:27.108533  998155 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:35:27.108544  998155 certs.go:257] generating profile certs ...
	I1208 01:35:27.108627  998155 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key
	I1208 01:35:27.108696  998155 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.key.4f3aea3b
	I1208 01:35:27.108742  998155 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.key
	I1208 01:35:27.108862  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:35:27.108898  998155 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:35:27.108910  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:35:27.108939  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:35:27.108969  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:35:27.108997  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:35:27.109047  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:27.109640  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:35:27.130550  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:35:27.148723  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:35:27.165998  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:35:27.183368  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 01:35:27.202688  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:35:27.220209  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:35:27.238424  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 01:35:27.257444  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:35:27.274666  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:35:27.292058  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:35:27.309217  998155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:35:27.321131  998155 ssh_runner.go:195] Run: openssl version
	I1208 01:35:27.327638  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.334906  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:35:27.342284  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.345852  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.345936  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.386755  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:35:27.394060  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.401012  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:35:27.408006  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.411669  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.411729  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.453549  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:35:27.460728  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.467647  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:35:27.474734  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.478506  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.478618  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.520673  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:35:27.528183  998155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:35:27.532452  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:35:27.575119  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:35:27.616056  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:35:27.657758  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:35:27.698219  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:35:27.738749  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:35:27.779311  998155 kubeadm.go:401] StartCluster: {Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:27.779438  998155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:35:27.779502  998155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:35:27.806649  998155 cri.go:89] found id: "0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888"
	I1208 01:35:27.806721  998155 cri.go:89] found id: "c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc"
	I1208 01:35:27.806737  998155 cri.go:89] found id: "4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125"
	I1208 01:35:27.806757  998155 cri.go:89] found id: "a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a"
	I1208 01:35:27.806771  998155 cri.go:89] found id: "1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16"
	I1208 01:35:27.806796  998155 cri.go:89] found id: "fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c"
	I1208 01:35:27.806814  998155 cri.go:89] found id: "8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14"
	I1208 01:35:27.806830  998155 cri.go:89] found id: ""
	I1208 01:35:27.806928  998155 ssh_runner.go:195] Run: sudo runc list -f json
	W1208 01:35:27.817511  998155 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:27Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:35:27.817587  998155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:35:27.825197  998155 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:35:27.825258  998155 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:35:27.825327  998155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:35:27.833083  998155 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:35:27.833689  998155 kubeconfig.go:125] found "pause-814452" server: "https://192.168.85.2:8443"
	I1208 01:35:27.834493  998155 kapi.go:59] client config for pause-814452: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:35:27.835024  998155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 01:35:27.835047  998155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 01:35:27.835055  998155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 01:35:27.835061  998155 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 01:35:27.835069  998155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
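(The rest.Config dump above shows a client built from the profile's client.crt/client.key and the cluster CA against https://192.168.85.2:8443. Below is a minimal client-go sketch that constructs an equivalent config and lists nodes; illustrative only, not minikube's kapi.go, and it assumes the cert paths and endpoint taken from the log.)

// kapi_client_sketch.go
// Builds a rest.Config directly from the cert files seen in the log and
// performs a simple node list to show the client works.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name) // expect "pause-814452" for this profile
	}
}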
	I1208 01:35:27.835363  998155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:35:27.842931  998155 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:35:27.843019  998155 kubeadm.go:602] duration metric: took 17.740704ms to restartPrimaryControlPlane
	I1208 01:35:27.843035  998155 kubeadm.go:403] duration metric: took 63.734122ms to StartCluster
	I1208 01:35:27.843051  998155 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.843130  998155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:35:27.844013  998155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.844254  998155 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:35:27.844525  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:27.844595  998155 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:35:27.850788  998155 out.go:179] * Verifying Kubernetes components...
	I1208 01:35:27.850804  998155 out.go:179] * Enabled addons: 
	I1208 01:35:27.853627  998155 addons.go:530] duration metric: took 9.02619ms for enable addons: enabled=[]
	I1208 01:35:27.853669  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:27.991677  998155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:35:28.007003  998155 node_ready.go:35] waiting up to 6m0s for node "pause-814452" to be "Ready" ...
	I1208 01:35:32.542327  998155 node_ready.go:49] node "pause-814452" is "Ready"
	I1208 01:35:32.542359  998155 node_ready.go:38] duration metric: took 4.535302593s for node "pause-814452" to be "Ready" ...
	I1208 01:35:32.542374  998155 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:35:32.542434  998155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:35:32.560363  998155 api_server.go:72] duration metric: took 4.716071727s to wait for apiserver process to appear ...
	I1208 01:35:32.560386  998155 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:35:32.560404  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:32.574774  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1208 01:35:32.574805  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1208 01:35:33.061487  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:33.069770  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 01:35:33.069804  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 01:35:33.561173  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:33.574135  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 01:35:33.574162  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 01:35:34.060737  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:34.068928  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:35:34.070029  998155 api_server.go:141] control plane version: v1.34.2
	I1208 01:35:34.070065  998155 api_server.go:131] duration metric: took 1.509669793s to wait for apiserver health ...
	I1208 01:35:34.070074  998155 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:35:34.073666  998155 system_pods.go:59] 7 kube-system pods found
	I1208 01:35:34.073700  998155 system_pods.go:61] "coredns-66bc5c9577-2sqj2" [d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98] Running
	I1208 01:35:34.073708  998155 system_pods.go:61] "etcd-pause-814452" [5935b236-05dd-4a50-8588-e915e6310e70] Running
	I1208 01:35:34.073713  998155 system_pods.go:61] "kindnet-ckhk6" [e09c741f-bc11-4b44-bd32-16d50b32078a] Running
	I1208 01:35:34.073716  998155 system_pods.go:61] "kube-apiserver-pause-814452" [21d3912f-469f-4184-9f64-61be846921d6] Running
	I1208 01:35:34.073721  998155 system_pods.go:61] "kube-controller-manager-pause-814452" [116a88a8-26c9-4620-a375-d8495ac0fc4f] Running
	I1208 01:35:34.073725  998155 system_pods.go:61] "kube-proxy-r58c9" [fadb8bf1-b94e-45ea-8bd9-0b456753562e] Running
	I1208 01:35:34.073734  998155 system_pods.go:61] "kube-scheduler-pause-814452" [8c20c133-c24c-4990-beae-f2e7d56795eb] Running
	I1208 01:35:34.073739  998155 system_pods.go:74] duration metric: took 3.660461ms to wait for pod list to return data ...
	I1208 01:35:34.073756  998155 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:35:34.076596  998155 default_sa.go:45] found service account: "default"
	I1208 01:35:34.076625  998155 default_sa.go:55] duration metric: took 2.863093ms for default service account to be created ...
	I1208 01:35:34.076644  998155 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:35:34.079876  998155 system_pods.go:86] 7 kube-system pods found
	I1208 01:35:34.079906  998155 system_pods.go:89] "coredns-66bc5c9577-2sqj2" [d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98] Running
	I1208 01:35:34.079913  998155 system_pods.go:89] "etcd-pause-814452" [5935b236-05dd-4a50-8588-e915e6310e70] Running
	I1208 01:35:34.079918  998155 system_pods.go:89] "kindnet-ckhk6" [e09c741f-bc11-4b44-bd32-16d50b32078a] Running
	I1208 01:35:34.079923  998155 system_pods.go:89] "kube-apiserver-pause-814452" [21d3912f-469f-4184-9f64-61be846921d6] Running
	I1208 01:35:34.079928  998155 system_pods.go:89] "kube-controller-manager-pause-814452" [116a88a8-26c9-4620-a375-d8495ac0fc4f] Running
	I1208 01:35:34.079932  998155 system_pods.go:89] "kube-proxy-r58c9" [fadb8bf1-b94e-45ea-8bd9-0b456753562e] Running
	I1208 01:35:34.079936  998155 system_pods.go:89] "kube-scheduler-pause-814452" [8c20c133-c24c-4990-beae-f2e7d56795eb] Running
	I1208 01:35:34.079942  998155 system_pods.go:126] duration metric: took 3.292809ms to wait for k8s-apps to be running ...
	I1208 01:35:34.079954  998155 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:35:34.080015  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:35:34.093852  998155 system_svc.go:56] duration metric: took 13.888429ms WaitForService to wait for kubelet
	I1208 01:35:34.093882  998155 kubeadm.go:587] duration metric: took 6.249596721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:35:34.093902  998155 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:35:34.097129  998155 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:35:34.097162  998155 node_conditions.go:123] node cpu capacity is 2
	I1208 01:35:34.097176  998155 node_conditions.go:105] duration metric: took 3.268767ms to run NodePressure ...
	I1208 01:35:34.097190  998155 start.go:242] waiting for startup goroutines ...
	I1208 01:35:34.097197  998155 start.go:247] waiting for cluster config update ...
	I1208 01:35:34.097205  998155 start.go:256] writing updated cluster config ...
	I1208 01:35:34.097526  998155 ssh_runner.go:195] Run: rm -f paused
	I1208 01:35:34.101940  998155 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:35:34.102649  998155 kapi.go:59] client config for pause-814452: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:35:34.105983  998155 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2sqj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.111370  998155 pod_ready.go:94] pod "coredns-66bc5c9577-2sqj2" is "Ready"
	I1208 01:35:34.111402  998155 pod_ready.go:86] duration metric: took 5.390509ms for pod "coredns-66bc5c9577-2sqj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.114022  998155 pod_ready.go:83] waiting for pod "etcd-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.118609  998155 pod_ready.go:94] pod "etcd-pause-814452" is "Ready"
	I1208 01:35:34.118683  998155 pod_ready.go:86] duration metric: took 4.624411ms for pod "etcd-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.120961  998155 pod_ready.go:83] waiting for pod "kube-apiserver-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.125599  998155 pod_ready.go:94] pod "kube-apiserver-pause-814452" is "Ready"
	I1208 01:35:34.125630  998155 pod_ready.go:86] duration metric: took 4.644243ms for pod "kube-apiserver-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.128200  998155 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.506304  998155 pod_ready.go:94] pod "kube-controller-manager-pause-814452" is "Ready"
	I1208 01:35:34.506332  998155 pod_ready.go:86] duration metric: took 378.107079ms for pod "kube-controller-manager-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.706349  998155 pod_ready.go:83] waiting for pod "kube-proxy-r58c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.106772  998155 pod_ready.go:94] pod "kube-proxy-r58c9" is "Ready"
	I1208 01:35:35.106804  998155 pod_ready.go:86] duration metric: took 400.426474ms for pod "kube-proxy-r58c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.305724  998155 pod_ready.go:83] waiting for pod "kube-scheduler-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.706723  998155 pod_ready.go:94] pod "kube-scheduler-pause-814452" is "Ready"
	I1208 01:35:35.706754  998155 pod_ready.go:86] duration metric: took 400.994692ms for pod "kube-scheduler-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.706767  998155 pod_ready.go:40] duration metric: took 1.604787064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:35:35.766402  998155 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:35:35.771860  998155 out.go:179] * Done! kubectl is now configured to use "pause-814452" cluster and "default" namespace by default
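
The healthz probes and label-based pod readiness checks recorded above can be reproduced by hand against the same cluster. A minimal sketch, assuming kubectl is pointed at the pause-814452 context (anonymous requests are rejected with 403, as seen at 01:35:32, so an authenticated client is required):

    # Per-check healthz detail, matching the [+]/[-] listing in the log
    kubectl get --raw='/healthz?verbose'

    # Probe a single post-start hook check, e.g. the one still failing at 01:35:33
    kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'

    # Wait for labelled kube-system pods the same way the test's extra wait does
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s

The endpoint paths and labels are taken from the log itself; the exact commands are illustrative and not part of the test harness.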
	
	
	==> CRI-O <==
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.332727306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.34376029Z" level=info msg="Created container 77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8: kube-system/kube-controller-manager-pause-814452/kube-controller-manager" id=aea48c0f-37dc-4ab3-b988-0fa92f61ffa8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.345540481Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-2sqj2/coredns" id=41eecfee-6cc3-48a6-aacd-47df591a6453 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.34606404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.351590174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.352301986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.356845953Z" level=info msg="Started container" PID=2362 containerID=f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff description=kube-system/etcd-pause-814452/etcd id=fc7bda2b-fdab-4536-bdfa-d81180818c18 name=/runtime.v1.RuntimeService/StartContainer sandboxID=929cbc05654b3af8b0be6ade6e61407830dbe4eec6630b19b209c7ca804321b3
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.357828521Z" level=info msg="Starting container: 77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8" id=55338915-11f6-41a4-b9bc-26e970529395 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.363774171Z" level=info msg="Started container" PID=2357 containerID=77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8 description=kube-system/kube-controller-manager-pause-814452/kube-controller-manager id=55338915-11f6-41a4-b9bc-26e970529395 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4892415340a4f1e1587186ca0df119bf6eea4d894c6f1339a3f2d08d5e17aea3
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.377905492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.379231179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.409993676Z" level=info msg="Created container a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7: kube-system/kindnet-ckhk6/kindnet-cni" id=9edd088f-7a35-45a2-8d4e-8017c1806b9e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.411197359Z" level=info msg="Starting container: a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7" id=86b774b3-1c48-4601-969b-08fb4bf5a8ae name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.413245327Z" level=info msg="Started container" PID=2398 containerID=a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7 description=kube-system/kindnet-ckhk6/kindnet-cni id=86b774b3-1c48-4601-969b-08fb4bf5a8ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=08d9d89026b5e2a01cd05824afb9faa1929a674f55fdb66b1f4daf5004373517
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.422778379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.423599271Z" level=info msg="Created container 0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652: kube-system/kube-scheduler-pause-814452/kube-scheduler" id=68b2525e-013b-404d-9e6d-0e1577333d2a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.424040475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.424334353Z" level=info msg="Starting container: 0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652" id=312e197a-bb1f-43b0-be51-318c8ced4601 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.439360077Z" level=info msg="Started container" PID=2385 containerID=0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652 description=kube-system/kube-scheduler-pause-814452/kube-scheduler id=312e197a-bb1f-43b0-be51-318c8ced4601 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7a1816b365da6baac6edff7e4e4d01ea75c51176e2d67142e08a365ddc6a5aa
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.460237526Z" level=info msg="Created container 5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70: kube-system/kube-proxy-r58c9/kube-proxy" id=422697aa-50e8-4d85-ac78-33744e00e66c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.460888874Z" level=info msg="Starting container: 5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70" id=4c90b6b0-8bcf-44bb-95c4-a08c5d8da89a name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.467071689Z" level=info msg="Created container 516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9: kube-system/coredns-66bc5c9577-2sqj2/coredns" id=41eecfee-6cc3-48a6-aacd-47df591a6453 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.467778643Z" level=info msg="Starting container: 516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9" id=9a7beb22-7416-464f-84ed-a3cf7fb0f78e name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.4682933Z" level=info msg="Started container" PID=2401 containerID=5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70 description=kube-system/kube-proxy-r58c9/kube-proxy id=4c90b6b0-8bcf-44bb-95c4-a08c5d8da89a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c0a0ede1ff934c8af1b1fab6636e1e02c48aaec5e65285df6aadb6d4a2965e
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.473610142Z" level=info msg="Started container" PID=2416 containerID=516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9 description=kube-system/coredns-66bc5c9577-2sqj2/coredns id=9a7beb22-7416-464f-84ed-a3cf7fb0f78e name=/runtime.v1.RuntimeService/StartContainer sandboxID=311d58bbe8eb4a5f187b30137e729ceedd99614db2a225841568e5804d0f8146
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	516e6700d90fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   10 seconds ago       Running             coredns                   1                   311d58bbe8eb4       coredns-66bc5c9577-2sqj2               kube-system
	a3680b7de4124       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   10 seconds ago       Running             kindnet-cni               1                   08d9d89026b5e       kindnet-ckhk6                          kube-system
	5e4a8988656de       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   10 seconds ago       Running             kube-proxy                1                   c3c0a0ede1ff9       kube-proxy-r58c9                       kube-system
	0ff169efe568f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   10 seconds ago       Running             kube-scheduler            1                   c7a1816b365da       kube-scheduler-pause-814452            kube-system
	f519779927d0c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   10 seconds ago       Running             etcd                      1                   929cbc05654b3       etcd-pause-814452                      kube-system
	72fae15be4bbe       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   10 seconds ago       Running             kube-apiserver            1                   70beef3348fc0       kube-apiserver-pause-814452            kube-system
	77c31b6f8ace1       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   10 seconds ago       Running             kube-controller-manager   1                   4892415340a4f       kube-controller-manager-pause-814452   kube-system
	0c48e24a4f283       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Exited              coredns                   0                   311d58bbe8eb4       coredns-66bc5c9577-2sqj2               kube-system
	c5bcaed767ee2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   c3c0a0ede1ff9       kube-proxy-r58c9                       kube-system
	4c6686bd89422       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   08d9d89026b5e       kindnet-ckhk6                          kube-system
	a6c51a18766ff       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   4892415340a4f       kube-controller-manager-pause-814452   kube-system
	1ae0f5b191d45       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   70beef3348fc0       kube-apiserver-pause-814452            kube-system
	fc971b2b759f0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   929cbc05654b3       etcd-pause-814452                      kube-system
	8cd16f9edc4e1       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   c7a1816b365da       kube-scheduler-pause-814452            kube-system
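
A container listing like the one above can typically be pulled straight from the node's CRI-O runtime; a sketch, assuming the minikube binary and the pause-814452 profile used in this run are still available:

    # List running and exited containers via the node's CRI socket
    minikube ssh -p pause-814452 -- sudo crictl ps -a

Column layout varies slightly between crictl versions, but the container IDs, images, states and attempt counts correspond to the table shown here.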
	
	
	==> coredns [0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35011 - 40916 "HINFO IN 4410955698505796276.6222712833875172857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029297332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59534 - 8451 "HINFO IN 3564368914563567705.1534246744001046578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02313337s
	
	
	==> describe nodes <==
	Name:               pause-814452
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-814452
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=pause-814452
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_34_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-814452
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:35:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-814452
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                79d1954f-3523-43b9-be94-ccedb1953bc7
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2sqj2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     64s
	  kube-system                 etcd-pause-814452                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         69s
	  kube-system                 kindnet-ckhk6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      64s
	  kube-system                 kube-apiserver-pause-814452             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-pause-814452    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-r58c9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-pause-814452             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 63s   kube-proxy       
	  Normal   Starting                 4s    kube-proxy       
	  Normal   Starting                 70s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s   kubelet          Node pause-814452 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s   kubelet          Node pause-814452 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s   kubelet          Node pause-814452 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           65s   node-controller  Node pause-814452 event: Registered Node pause-814452 in Controller
	  Normal   NodeReady                24s   kubelet          Node pause-814452 status is now: NodeReady
	  Normal   RegisteredNode           3s    node-controller  Node pause-814452 event: Registered Node pause-814452 in Controller
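
The node summary above is standard describe output; a minimal sketch of how to regenerate it against this profile, assuming the kubeconfig written by the run is still in place:

    # Full node status, capacity, allocatable resources and recent events
    kubectl describe node pause-814452

    # Or just the condition block in machine-readable form
    kubectl get node pause-814452 -o jsonpath='{.status.conditions}'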
	
	
	==> dmesg <==
	[Dec 8 00:59] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:00] overlayfs: idmapped layers are currently not supported
	[  +3.041176] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:02] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:03] overlayfs: idmapped layers are currently not supported
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff] <==
	{"level":"warn","ts":"2025-12-08T01:35:31.221623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.243013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.264576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.316101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.341059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.362221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.379841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.396643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.415455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.442026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.455527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.472441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.489745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.506809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.536315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.549982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.567925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.584734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.608035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.621150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.643878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.671120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.696956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.714097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.807716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	
	
	==> etcd [fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c] <==
	{"level":"warn","ts":"2025-12-08T01:34:25.324766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.348093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.366540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.409420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.434895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.442908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.515537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T01:35:19.696997Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-08T01:35:19.697056Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-814452","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-08T01:35:19.699424Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T01:35:19.833898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T01:35:19.835333Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.835381Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-08T01:35:19.835447Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-08T01:35:19.835465Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835603Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T01:35:19.835637Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835707Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T01:35:19.835736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.838672Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-08T01:35:19.838743Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.838809Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-08T01:35:19.838867Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-814452","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 01:35:38 up  6:17,  0 user,  load average: 2.44, 1.66, 1.72
	Linux pause-814452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125] <==
	I1208 01:34:34.639106       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:34:34.639340       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:34:34.639471       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:34:34.639488       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:34:34.639498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:34:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:34:34.836591       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:34:34.840074       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:34:34.840179       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:34:34.840343       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:35:04.837086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:35:04.837087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:35:04.837131       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1208 01:35:04.837206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1208 01:35:05.840584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:35:05.840627       1 metrics.go:72] Registering metrics
	I1208 01:35:05.840702       1 controller.go:711] "Syncing nftables rules"
	I1208 01:35:14.836123       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:35:14.836180       1 main.go:301] handling current node
	
	
	==> kindnet [a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7] <==
	I1208 01:35:28.524448       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:35:28.526965       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:35:28.528668       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:35:28.528745       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:35:28.528780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:35:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:35:28.680059       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:35:28.721582       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:35:28.721682       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:35:28.723637       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1208 01:35:32.722804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:35:32.723694       1 metrics.go:72] Registering metrics
	I1208 01:35:32.723771       1 controller.go:711] "Syncing nftables rules"
	I1208 01:35:38.682924       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:35:38.682974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16] <==
	W1208 01:35:19.715372       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715433       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715490       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715546       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715614       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716058       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716119       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716165       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716325       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716375       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716420       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716469       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716515       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716565       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716615       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716662       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716707       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716762       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716810       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716854       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716898       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716945       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717051       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717851       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717924       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [72fae15be4bbe10f53114796f4ea74adf286d63828cec603838a2da804289607] <==
	I1208 01:35:32.336047       1 cluster_authentication_trust_controller.go:459] Starting cluster_authentication_trust_controller controller
	I1208 01:35:32.472318       1 shared_informer.go:349] "Waiting for caches to sync" controller="cluster_authentication_trust_controller"
	I1208 01:35:32.612195       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1208 01:35:32.621755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1208 01:35:32.621793       1 policy_source.go:240] refreshing policies
	I1208 01:35:32.637275       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:35:32.643082       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:35:32.655725       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:35:32.664909       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 01:35:32.665066       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 01:35:32.672065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1208 01:35:32.672076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:35:32.672213       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:35:32.672236       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 01:35:32.672344       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:35:32.672094       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:35:32.672109       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1208 01:35:32.672686       1 aggregator.go:171] initial CRD sync complete...
	I1208 01:35:32.672725       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 01:35:32.672768       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:35:32.672804       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:35:32.673155       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1208 01:35:32.679268       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:35:33.346020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:35:34.591190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8] <==
	I1208 01:35:36.003497       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1208 01:35:36.010319       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:35:36.012658       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1208 01:35:36.015032       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 01:35:36.017033       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:35:36.019820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1208 01:35:36.024381       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 01:35:36.024514       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:35:36.024665       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:35:36.031525       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 01:35:36.034739       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 01:35:36.037492       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1208 01:35:36.040983       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:35:36.043016       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:35:36.043140       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:35:36.043370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:35:36.043649       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:35:36.043708       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:35:36.043751       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1208 01:35:36.044737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:35:36.044822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:35:36.044978       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 01:35:36.047536       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:35:36.057053       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 01:35:36.068778       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a] <==
	I1208 01:34:33.408094       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 01:34:33.408685       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:34:33.407631       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1208 01:34:33.409732       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:34:33.410065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:34:33.411283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:34:33.411703       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1208 01:34:33.414064       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1208 01:34:33.414143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:34:33.414156       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1208 01:34:33.414241       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1208 01:34:33.414260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1208 01:34:33.414266       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 01:34:33.414407       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:34:33.414543       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:34:33.414642       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-814452"
	I1208 01:34:33.414711       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1208 01:34:33.417467       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 01:34:33.420104       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:34:33.420133       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:34:33.420141       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:34:33.420588       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:34:33.432525       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-814452" podCIDRs=["10.244.0.0/24"]
	I1208 01:34:33.432623       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:35:18.422400       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70] <==
	I1208 01:35:28.593488       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:35:30.043982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1208 01:35:32.601637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-814452\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1208 01:35:33.716725       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:35:33.716772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:35:33.716863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:35:33.768315       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:35:33.768374       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:35:33.775010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:35:33.775342       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:35:33.775366       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:35:33.777014       1 config.go:200] "Starting service config controller"
	I1208 01:35:33.777041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:35:33.777060       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:35:33.777064       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:35:33.777089       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:35:33.777094       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:35:33.778028       1 config.go:309] "Starting node config controller"
	I1208 01:35:33.778061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:35:33.778068       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:35:33.877563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:35:33.877572       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:35:33.877610       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc] <==
	I1208 01:34:34.714126       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:34:34.856474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:34:34.980061       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:34:34.980094       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:34:34.980169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:34:35.044186       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:34:35.044313       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:34:35.049500       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:34:35.049859       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:34:35.050078       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:34:35.066153       1 config.go:200] "Starting service config controller"
	I1208 01:34:35.066257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:34:35.066314       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:34:35.066343       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:34:35.066389       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:34:35.066418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:34:35.067295       1 config.go:309] "Starting node config controller"
	I1208 01:34:35.067364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:34:35.067394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:34:35.167123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:34:35.167166       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:34:35.167205       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652] <==
	I1208 01:35:30.807403       1 serving.go:386] Generated self-signed cert in-memory
	W1208 01:35:32.439063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 01:35:32.439190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 01:35:32.439226       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 01:35:32.439382       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 01:35:32.590087       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:35:32.590552       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:35:32.602759       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:35:32.603039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:32.603099       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:32.603144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:35:32.703413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14] <==
	E1208 01:34:26.405195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:34:26.405257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 01:34:26.405315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 01:34:26.405372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 01:34:26.405422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 01:34:26.405479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 01:34:26.405573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 01:34:26.405651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 01:34:26.408599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 01:34:27.226002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 01:34:27.291361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:34:27.299220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 01:34:27.318504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 01:34:27.319715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 01:34:27.343488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 01:34:27.521651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 01:34:27.575865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 01:34:27.588387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1208 01:34:27.958451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:19.690768       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1208 01:35:19.690797       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1208 01:35:19.690818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1208 01:35:19.691084       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:19.691389       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1208 01:35:19.691408       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="e09c741f-bc11-4b44-bd32-16d50b32078a" pod="kube-system/kindnet-ckhk6"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.593154    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-proxy-r58c9" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="fadb8bf1-b94e-45ea-8bd9-0b456753562e" pod="kube-system/kube-proxy-r58c9"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.594094    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "coredns-66bc5c9577-2sqj2" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98" pod="kube-system/coredns-66bc5c9577-2sqj2"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.595141    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "etcd-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="306159274b6322f70291775224f95d81" pod="kube-system/etcd-pause-814452"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.596231    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-scheduler-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="00c12c8f188df6cf0bae7798770b792d" pod="kube-system/kube-scheduler-pause-814452"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.597116    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-apiserver-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="ab433440dd54af3441c62dc268aef562" pod="kube-system/kube-apiserver-pause-814452"
	Dec 08 01:35:36 pause-814452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:35:36 pause-814452 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:35:36 pause-814452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-814452 -n pause-814452
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-814452 -n pause-814452: exit status 2 (349.476031ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-814452 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-814452
helpers_test.go:243: (dbg) docker inspect pause-814452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688",
	        "Created": "2025-12-08T01:34:05.276754584Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:34:05.347890283Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/hostname",
	        "HostsPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/hosts",
	        "LogPath": "/var/lib/docker/containers/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688/8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688-json.log",
	        "Name": "/pause-814452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-814452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-814452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8110bb7c02fc853700e71acc2068ae24170efb575cacd437648838c9085ae688",
	                "LowerDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa2205a1fd5f7f3912278be57a904a7b8adcc2651a943d0b3d77acc38ae55cf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-814452",
	                "Source": "/var/lib/docker/volumes/pause-814452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-814452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-814452",
	                "name.minikube.sigs.k8s.io": "pause-814452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d776182e7a89fc3a1a42b3db5b334e4ddebf426833000a2bab88e7497d5c8ad6",
	            "SandboxKey": "/var/run/docker/netns/d776182e7a89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33751"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-814452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:57:ee:ff:94:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2ba844c4593cb0ab79941bf15dcda381823827aca81dbb7d6fcf6500ea56fbd",
	                    "EndpointID": "343fc667fb1f7a64e531d09ab3425f1407b8a0466d24ca539f00436a108757e7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-814452",
	                        "8110bb7c02fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-814452 -n pause-814452
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-814452 -n pause-814452: exit status 2 (362.227969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-814452 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-814452 logs -n 25: (1.374914444s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-526754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:21 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p missing-upgrade-156445 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-156445    │ jenkins │ v1.35.0 │ 08 Dec 25 01:21 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p missing-upgrade-156445 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-156445    │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:23 UTC │
	│ delete  │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:22 UTC │
	│ start   │ -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:22 UTC │ 08 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-526754 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ stop    │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p NoKubernetes-526754 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ ssh     │ -p NoKubernetes-526754 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ delete  │ -p NoKubernetes-526754                                                                                                                          │ NoKubernetes-526754       │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ delete  │ -p missing-upgrade-156445                                                                                                                       │ missing-upgrade-156445    │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p stopped-upgrade-971260 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-971260    │ jenkins │ v1.35.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:24 UTC │
	│ stop    │ -p kubernetes-upgrade-386622                                                                                                                    │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │ 08 Dec 25 01:23 UTC │
	│ start   │ -p kubernetes-upgrade-386622 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-386622 │ jenkins │ v1.37.0 │ 08 Dec 25 01:23 UTC │                     │
	│ stop    │ stopped-upgrade-971260 stop                                                                                                                     │ stopped-upgrade-971260    │ jenkins │ v1.35.0 │ 08 Dec 25 01:24 UTC │ 08 Dec 25 01:24 UTC │
	│ start   │ -p stopped-upgrade-971260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-971260    │ jenkins │ v1.37.0 │ 08 Dec 25 01:24 UTC │ 08 Dec 25 01:28 UTC │
	│ delete  │ -p stopped-upgrade-971260                                                                                                                       │ stopped-upgrade-971260    │ jenkins │ v1.37.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:28 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-457612    │ jenkins │ v1.35.0 │ 08 Dec 25 01:28 UTC │ 08 Dec 25 01:29 UTC │
	│ start   │ -p running-upgrade-457612 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:29 UTC │ 08 Dec 25 01:33 UTC │
	│ delete  │ -p running-upgrade-457612                                                                                                                       │ running-upgrade-457612    │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	│ start   │ -p pause-814452 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:35 UTC │
	│ start   │ -p pause-814452 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │ 08 Dec 25 01:35 UTC │
	│ pause   │ -p pause-814452 --alsologtostderr -v=5                                                                                                          │ pause-814452              │ jenkins │ v1.37.0 │ 08 Dec 25 01:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:35:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
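The "Last Start" log below follows the klog/glog line format described above: a severity letter (I, W, E, or F), the date and time, the thread id, and the source file and line. As a hedged example, warnings and errors can be pulled out of a saved copy of this log (the file name here is hypothetical) with:

	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log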
	I1208 01:35:18.348791  998155 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:35:18.348929  998155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:18.348939  998155 out.go:374] Setting ErrFile to fd 2...
	I1208 01:35:18.348944  998155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:35:18.349264  998155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:35:18.349686  998155 out.go:368] Setting JSON to false
	I1208 01:35:18.350882  998155 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22651,"bootTime":1765135068,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:35:18.350964  998155 start.go:143] virtualization:  
	I1208 01:35:18.355974  998155 out.go:179] * [pause-814452] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:35:18.359059  998155 notify.go:221] Checking for updates...
	I1208 01:35:18.359016  998155 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:35:18.362720  998155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:35:18.365943  998155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:35:18.368942  998155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:35:18.371897  998155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:35:18.374974  998155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:35:18.378366  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:18.379013  998155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:35:18.410827  998155 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:35:18.411009  998155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:18.470951  998155 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:35:18.460870733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:18.471060  998155 docker.go:319] overlay module found
	I1208 01:35:18.474206  998155 out.go:179] * Using the docker driver based on existing profile
	I1208 01:35:18.477061  998155 start.go:309] selected driver: docker
	I1208 01:35:18.477086  998155 start.go:927] validating driver "docker" against &{Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:18.477221  998155 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:35:18.477335  998155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:35:18.558409  998155 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:35:18.549358435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:35:18.558807  998155 cni.go:84] Creating CNI manager for ""
	I1208 01:35:18.558901  998155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:18.558948  998155 start.go:353] cluster config:
	{Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:18.564210  998155 out.go:179] * Starting "pause-814452" primary control-plane node in "pause-814452" cluster
	I1208 01:35:18.567113  998155 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:35:18.570124  998155 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:35:18.572996  998155 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:18.573047  998155 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:35:18.573058  998155 cache.go:65] Caching tarball of preloaded images
	I1208 01:35:18.573154  998155 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:35:18.573163  998155 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:35:18.573301  998155 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/config.json ...
	I1208 01:35:18.573533  998155 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:35:18.596962  998155 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:35:18.596987  998155 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:35:18.597002  998155 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:35:18.597036  998155 start.go:360] acquireMachinesLock for pause-814452: {Name:mk5f799fc081980318a78c850936f14295afd380 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:35:18.597107  998155 start.go:364] duration metric: took 36.39µs to acquireMachinesLock for "pause-814452"
	I1208 01:35:18.597132  998155 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:35:18.597137  998155 fix.go:54] fixHost starting: 
	I1208 01:35:18.597401  998155 cli_runner.go:164] Run: docker container inspect pause-814452 --format={{.State.Status}}
	I1208 01:35:18.615199  998155 fix.go:112] recreateIfNeeded on pause-814452: state=Running err=<nil>
	W1208 01:35:18.615233  998155 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:35:18.618581  998155 out.go:252] * Updating the running docker "pause-814452" container ...
	I1208 01:35:18.618619  998155 machine.go:94] provisionDockerMachine start ...
	I1208 01:35:18.618718  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.635380  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.635743  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.635767  998155 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:35:18.786324  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-814452
	
	I1208 01:35:18.786348  998155 ubuntu.go:182] provisioning hostname "pause-814452"
	I1208 01:35:18.786412  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.803388  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.803704  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.803720  998155 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-814452 && echo "pause-814452" | sudo tee /etc/hostname
	I1208 01:35:18.969114  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-814452
	
	I1208 01:35:18.969200  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:18.986812  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:18.987216  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:18.987235  998155 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-814452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-814452/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-814452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:35:19.143371  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: 
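The SSH script above makes sure the node's /etc/hosts maps 127.0.1.1 to the machine name, rewriting an existing 127.0.1.1 entry or appending one. Since this profile uses the docker driver, a minimal sketch for confirming the result from the host (container name taken from the log) is:

	docker exec pause-814452 hostname
	docker exec pause-814452 grep pause-814452 /etc/hosts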
	I1208 01:35:19.143398  998155 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:35:19.143431  998155 ubuntu.go:190] setting up certificates
	I1208 01:35:19.143440  998155 provision.go:84] configureAuth start
	I1208 01:35:19.143519  998155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-814452
	I1208 01:35:19.161785  998155 provision.go:143] copyHostCerts
	I1208 01:35:19.161865  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:35:19.161885  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:35:19.161962  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:35:19.162082  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:35:19.162094  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:35:19.162121  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:35:19.162176  998155 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:35:19.162184  998155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:35:19.162208  998155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:35:19.162258  998155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.pause-814452 san=[127.0.0.1 192.168.85.2 localhost minikube pause-814452]
	I1208 01:35:19.318310  998155 provision.go:177] copyRemoteCerts
	I1208 01:35:19.318378  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:35:19.318437  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:19.337117  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:19.443091  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:35:19.461606  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:35:19.480147  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 01:35:19.498606  998155 provision.go:87] duration metric: took 355.14113ms to configureAuth
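configureAuth above regenerates the machine server certificate with SANs for 127.0.0.1, 192.168.85.2, localhost, minikube and pause-814452, then copies it to /etc/docker on the node. A hedged way to inspect the SANs of the copied certificate from a shell inside the node (e.g. via minikube -p pause-814452 ssh; the remote path is the one used in the log) is:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'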
	I1208 01:35:19.498634  998155 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:35:19.498945  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:19.499062  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:19.518023  998155 main.go:143] libmachine: Using SSH client type: native
	I1208 01:35:19.518339  998155 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33747 <nil> <nil>}
	I1208 01:35:19.518359  998155 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:35:24.904905  998155 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:35:24.904935  998155 machine.go:97] duration metric: took 6.286307295s to provisionDockerMachine
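Most of the 6.28 s provisionDockerMachine time above is the SSH command that writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube and then restarts CRI-O; the timestamps jump from 01:35:19.5 to 01:35:24.9 across that call. An illustrative read-back, again via the docker driver, would be:

	docker exec pause-814452 cat /etc/sysconfig/crio.minikube
	docker exec pause-814452 systemctl is-active crio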
	I1208 01:35:24.904948  998155 start.go:293] postStartSetup for "pause-814452" (driver="docker")
	I1208 01:35:24.904958  998155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:35:24.905026  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:35:24.905077  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:24.922146  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.028087  998155 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:35:25.032266  998155 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:35:25.032309  998155 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:35:25.032344  998155 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:35:25.032426  998155 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:35:25.032559  998155 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:35:25.032679  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:35:25.041085  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:25.061508  998155 start.go:296] duration metric: took 156.544789ms for postStartSetup
	I1208 01:35:25.061615  998155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:35:25.061672  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.079574  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.184310  998155 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:35:25.189959  998155 fix.go:56] duration metric: took 6.592814373s for fixHost
	I1208 01:35:25.189984  998155 start.go:83] releasing machines lock for "pause-814452", held for 6.592862874s
	I1208 01:35:25.190066  998155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-814452
	I1208 01:35:25.206606  998155 ssh_runner.go:195] Run: cat /version.json
	I1208 01:35:25.206623  998155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:35:25.206696  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.206707  998155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-814452
	I1208 01:35:25.225856  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.227358  998155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33747 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/pause-814452/id_rsa Username:docker}
	I1208 01:35:25.338687  998155 ssh_runner.go:195] Run: systemctl --version
	I1208 01:35:25.437066  998155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:35:25.479134  998155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:35:25.483470  998155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:35:25.483538  998155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:35:25.491881  998155 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:35:25.491903  998155 start.go:496] detecting cgroup driver to use...
	I1208 01:35:25.491933  998155 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:35:25.491999  998155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:35:25.507559  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:35:25.520832  998155 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:35:25.520892  998155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:35:25.536844  998155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:35:25.550306  998155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:35:25.690383  998155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:35:25.836104  998155 docker.go:234] disabling docker service ...
	I1208 01:35:25.836206  998155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:35:25.851711  998155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:35:25.865421  998155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:35:26.018346  998155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:35:26.167876  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:35:26.181441  998155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:35:26.196549  998155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:35:26.196616  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.205153  998155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:35:26.205254  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.213865  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.222971  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.231880  998155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:35:26.240093  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.248810  998155 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.257049  998155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:35:26.265977  998155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:35:26.273803  998155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:35:26.281300  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:26.414963  998155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:35:26.637639  998155 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:35:26.637758  998155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:35:26.641684  998155 start.go:564] Will wait 60s for crictl version
	I1208 01:35:26.641749  998155 ssh_runner.go:195] Run: which crictl
	I1208 01:35:26.645210  998155 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:35:26.668790  998155 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:35:26.668908  998155 ssh_runner.go:195] Run: crio --version
	I1208 01:35:26.696960  998155 ssh_runner.go:195] Run: crio --version
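The lines from 01:35:26.196 to 01:35:26.257 adjust /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, after which CRI-O is restarted. A minimal sketch for reading the effective values back from inside the node (crio config is the same command the log itself runs at 01:35:26.833) is:

	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
	sudo crictl version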
	I1208 01:35:26.731601  998155 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:35:26.734583  998155 cli_runner.go:164] Run: docker network inspect pause-814452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:35:26.749968  998155 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:35:26.755825  998155 kubeadm.go:884] updating cluster {Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:35:26.755979  998155 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:35:26.756031  998155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:26.801001  998155 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:26.801021  998155 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:35:26.801076  998155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:35:26.833095  998155 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:35:26.833120  998155 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:35:26.833128  998155 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:35:26.833312  998155 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-814452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
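The kubelet unit snippet above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the 362-byte scp a few lines below) and overrides ExecStart to run the v1.34.2 kubelet from /var/lib/minikube/binaries with the node IP and hostname override. As a hedged check, systemd can show the merged unit with its drop-ins from inside the node:

	docker exec pause-814452 systemctl cat kubelet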
	I1208 01:35:26.833396  998155 ssh_runner.go:195] Run: crio config
	I1208 01:35:26.904923  998155 cni.go:84] Creating CNI manager for ""
	I1208 01:35:26.904947  998155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:35:26.904971  998155 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:35:26.904994  998155 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-814452 NodeName:pause-814452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:35:26.905124  998155 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-814452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:35:26.905200  998155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:35:26.912999  998155 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:35:26.913067  998155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:35:26.920355  998155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1208 01:35:26.933471  998155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:35:26.946214  998155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
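The kubeadm config dumped above bundles four documents: InitConfiguration (API endpoint and node registration), ClusterConfiguration (control-plane endpoint, certSANs, extraArgs), KubeletConfiguration (cgroupfs driver, CRI-O socket, relaxed eviction thresholds) and KubeProxyConfiguration. It is written to the node as /var/tmp/minikube/kubeadm.yaml.new in the 2209-byte scp just above. A hedged way to validate that file without changing the running cluster (sketch only; the kubeadm path is assumed from the binaries directory the log lists) is:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run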
	I1208 01:35:26.959885  998155 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:35:26.964014  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:27.095276  998155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:35:27.108312  998155 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452 for IP: 192.168.85.2
	I1208 01:35:27.108335  998155 certs.go:195] generating shared ca certs ...
	I1208 01:35:27.108351  998155 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.108484  998155 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:35:27.108533  998155 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:35:27.108544  998155 certs.go:257] generating profile certs ...
	I1208 01:35:27.108627  998155 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key
	I1208 01:35:27.108696  998155 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.key.4f3aea3b
	I1208 01:35:27.108742  998155 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.key
	I1208 01:35:27.108862  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:35:27.108898  998155 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:35:27.108910  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:35:27.108939  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:35:27.108969  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:35:27.108997  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:35:27.109047  998155 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:35:27.109640  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:35:27.130550  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:35:27.148723  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:35:27.165998  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:35:27.183368  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 01:35:27.202688  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:35:27.220209  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:35:27.238424  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 01:35:27.257444  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:35:27.274666  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:35:27.292058  998155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:35:27.309217  998155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:35:27.321131  998155 ssh_runner.go:195] Run: openssl version
	I1208 01:35:27.327638  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.334906  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:35:27.342284  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.345852  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.345936  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:35:27.386755  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:35:27.394060  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.401012  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:35:27.408006  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.411669  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.411729  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:35:27.453549  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:35:27.460728  998155 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.467647  998155 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:35:27.474734  998155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.478506  998155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.478618  998155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:35:27.520673  998155 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:35:27.528183  998155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:35:27.532452  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:35:27.575119  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:35:27.616056  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:35:27.657758  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:35:27.698219  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:35:27.738749  998155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
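The sequence above checks each existing control-plane certificate with openssl x509 -checkend 86400, which exits 0 only if the certificate remains valid for at least another 24 hours. The same checks can be expressed as a small loop (illustrative; the paths are the ones used in the log):

	for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client etcd/server etcd/peer etcd/healthcheck-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c ok" || echo "$c expires within 24h"
	done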
	I1208 01:35:27.779311  998155 kubeadm.go:401] StartCluster: {Name:pause-814452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-814452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:35:27.779438  998155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:35:27.779502  998155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:35:27.806649  998155 cri.go:89] found id: "0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888"
	I1208 01:35:27.806721  998155 cri.go:89] found id: "c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc"
	I1208 01:35:27.806737  998155 cri.go:89] found id: "4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125"
	I1208 01:35:27.806757  998155 cri.go:89] found id: "a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a"
	I1208 01:35:27.806771  998155 cri.go:89] found id: "1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16"
	I1208 01:35:27.806796  998155 cri.go:89] found id: "fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c"
	I1208 01:35:27.806814  998155 cri.go:89] found id: "8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14"
	I1208 01:35:27.806830  998155 cri.go:89] found id: ""
	I1208 01:35:27.806928  998155 ssh_runner.go:195] Run: sudo runc list -f json
	W1208 01:35:27.817511  998155 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:35:27Z" level=error msg="open /run/runc: no such file or directory"
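The warning above comes from minikube listing paused containers with runc as part of StartCluster; the start continues past it (the next lines go on to detect the existing configuration files). If this needed to be chased by hand, the commands the log runs can be repeated from inside the node, plus a check for the state directory named in the error (diagnostic sketch only):

	sudo runc list -f json
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	ls -ld /run/runc 2>/dev/null || echo '/run/runc is missing'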
	I1208 01:35:27.817587  998155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:35:27.825197  998155 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:35:27.825258  998155 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:35:27.825327  998155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:35:27.833083  998155 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:35:27.833689  998155 kubeconfig.go:125] found "pause-814452" server: "https://192.168.85.2:8443"
	I1208 01:35:27.834493  998155 kapi.go:59] client config for pause-814452: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:35:27.835024  998155 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 01:35:27.835047  998155 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 01:35:27.835055  998155 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 01:35:27.835061  998155 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 01:35:27.835069  998155 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 01:35:27.835363  998155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:35:27.842931  998155 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:35:27.843019  998155 kubeadm.go:602] duration metric: took 17.740704ms to restartPrimaryControlPlane
	I1208 01:35:27.843035  998155 kubeadm.go:403] duration metric: took 63.734122ms to StartCluster
	I1208 01:35:27.843051  998155 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.843130  998155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:35:27.844013  998155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:35:27.844254  998155 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:35:27.844525  998155 config.go:182] Loaded profile config "pause-814452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:35:27.844595  998155 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:35:27.850788  998155 out.go:179] * Verifying Kubernetes components...
	I1208 01:35:27.850804  998155 out.go:179] * Enabled addons: 
	I1208 01:35:27.853627  998155 addons.go:530] duration metric: took 9.02619ms for enable addons: enabled=[]
	I1208 01:35:27.853669  998155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:35:27.991677  998155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:35:28.007003  998155 node_ready.go:35] waiting up to 6m0s for node "pause-814452" to be "Ready" ...
	I1208 01:35:32.542327  998155 node_ready.go:49] node "pause-814452" is "Ready"
	I1208 01:35:32.542359  998155 node_ready.go:38] duration metric: took 4.535302593s for node "pause-814452" to be "Ready" ...
	I1208 01:35:32.542374  998155 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:35:32.542434  998155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:35:32.560363  998155 api_server.go:72] duration metric: took 4.716071727s to wait for apiserver process to appear ...
	I1208 01:35:32.560386  998155 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:35:32.560404  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:32.574774  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1208 01:35:32.574805  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1208 01:35:33.061487  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:33.069770  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 01:35:33.069804  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 01:35:33.561173  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:33.574135  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 01:35:33.574162  998155 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 01:35:34.060737  998155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:35:34.068928  998155 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:35:34.070029  998155 api_server.go:141] control plane version: v1.34.2
	I1208 01:35:34.070065  998155 api_server.go:131] duration metric: took 1.509669793s to wait for apiserver health ...
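	
	(The healthz polling above starts with a 403 because the probe is unauthenticated, so system:anonymous is forbidden, then returns 500 while the rbac and priority-class bootstrap post-start hooks finish, and finally 200. Below is a minimal Go sketch of such a poll loop, assuming the client certificate, key, and CA paths printed in the rest.Config earlier are readable from where it runs; it is an illustration, not minikube's implementation.)
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"crypto/x509"
	    	"fmt"
	    	"net/http"
	    	"os"
	    	"time"
	    )
	
	    func main() {
	    	// Paths as printed in the client config above (assumed readable here).
	    	profile := "/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452"
	    	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	    	if err != nil {
	    		panic(err)
	    	}
	    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	pool := x509.NewCertPool()
	    	pool.AppendCertsFromPEM(caPEM)
	
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{
	    			Certificates: []tls.Certificate{cert},
	    			RootCAs:      pool,
	    		}},
	    	}
	
	    	// Retry every 500ms until /healthz answers 200; 403/500 during startup are expected.
	    	for {
	    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
	    		if err == nil {
	    			code := resp.StatusCode
	    			resp.Body.Close()
	    			if code == http.StatusOK {
	    				fmt.Println("apiserver healthy")
	    				return
	    			}
	    			fmt.Println("healthz returned", code)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }
	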
	I1208 01:35:34.070074  998155 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:35:34.073666  998155 system_pods.go:59] 7 kube-system pods found
	I1208 01:35:34.073700  998155 system_pods.go:61] "coredns-66bc5c9577-2sqj2" [d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98] Running
	I1208 01:35:34.073708  998155 system_pods.go:61] "etcd-pause-814452" [5935b236-05dd-4a50-8588-e915e6310e70] Running
	I1208 01:35:34.073713  998155 system_pods.go:61] "kindnet-ckhk6" [e09c741f-bc11-4b44-bd32-16d50b32078a] Running
	I1208 01:35:34.073716  998155 system_pods.go:61] "kube-apiserver-pause-814452" [21d3912f-469f-4184-9f64-61be846921d6] Running
	I1208 01:35:34.073721  998155 system_pods.go:61] "kube-controller-manager-pause-814452" [116a88a8-26c9-4620-a375-d8495ac0fc4f] Running
	I1208 01:35:34.073725  998155 system_pods.go:61] "kube-proxy-r58c9" [fadb8bf1-b94e-45ea-8bd9-0b456753562e] Running
	I1208 01:35:34.073734  998155 system_pods.go:61] "kube-scheduler-pause-814452" [8c20c133-c24c-4990-beae-f2e7d56795eb] Running
	I1208 01:35:34.073739  998155 system_pods.go:74] duration metric: took 3.660461ms to wait for pod list to return data ...
	I1208 01:35:34.073756  998155 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:35:34.076596  998155 default_sa.go:45] found service account: "default"
	I1208 01:35:34.076625  998155 default_sa.go:55] duration metric: took 2.863093ms for default service account to be created ...
	I1208 01:35:34.076644  998155 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:35:34.079876  998155 system_pods.go:86] 7 kube-system pods found
	I1208 01:35:34.079906  998155 system_pods.go:89] "coredns-66bc5c9577-2sqj2" [d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98] Running
	I1208 01:35:34.079913  998155 system_pods.go:89] "etcd-pause-814452" [5935b236-05dd-4a50-8588-e915e6310e70] Running
	I1208 01:35:34.079918  998155 system_pods.go:89] "kindnet-ckhk6" [e09c741f-bc11-4b44-bd32-16d50b32078a] Running
	I1208 01:35:34.079923  998155 system_pods.go:89] "kube-apiserver-pause-814452" [21d3912f-469f-4184-9f64-61be846921d6] Running
	I1208 01:35:34.079928  998155 system_pods.go:89] "kube-controller-manager-pause-814452" [116a88a8-26c9-4620-a375-d8495ac0fc4f] Running
	I1208 01:35:34.079932  998155 system_pods.go:89] "kube-proxy-r58c9" [fadb8bf1-b94e-45ea-8bd9-0b456753562e] Running
	I1208 01:35:34.079936  998155 system_pods.go:89] "kube-scheduler-pause-814452" [8c20c133-c24c-4990-beae-f2e7d56795eb] Running
	I1208 01:35:34.079942  998155 system_pods.go:126] duration metric: took 3.292809ms to wait for k8s-apps to be running ...
	I1208 01:35:34.079954  998155 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:35:34.080015  998155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:35:34.093852  998155 system_svc.go:56] duration metric: took 13.888429ms WaitForService to wait for kubelet
	I1208 01:35:34.093882  998155 kubeadm.go:587] duration metric: took 6.249596721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:35:34.093902  998155 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:35:34.097129  998155 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:35:34.097162  998155 node_conditions.go:123] node cpu capacity is 2
	I1208 01:35:34.097176  998155 node_conditions.go:105] duration metric: took 3.268767ms to run NodePressure ...
	I1208 01:35:34.097190  998155 start.go:242] waiting for startup goroutines ...
	I1208 01:35:34.097197  998155 start.go:247] waiting for cluster config update ...
	I1208 01:35:34.097205  998155 start.go:256] writing updated cluster config ...
	I1208 01:35:34.097526  998155 ssh_runner.go:195] Run: rm -f paused
	I1208 01:35:34.101940  998155 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:35:34.102649  998155 kapi.go:59] client config for pause-814452: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/profiles/pause-814452/client.key", CAFile:"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:35:34.105983  998155 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2sqj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.111370  998155 pod_ready.go:94] pod "coredns-66bc5c9577-2sqj2" is "Ready"
	I1208 01:35:34.111402  998155 pod_ready.go:86] duration metric: took 5.390509ms for pod "coredns-66bc5c9577-2sqj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.114022  998155 pod_ready.go:83] waiting for pod "etcd-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.118609  998155 pod_ready.go:94] pod "etcd-pause-814452" is "Ready"
	I1208 01:35:34.118683  998155 pod_ready.go:86] duration metric: took 4.624411ms for pod "etcd-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.120961  998155 pod_ready.go:83] waiting for pod "kube-apiserver-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.125599  998155 pod_ready.go:94] pod "kube-apiserver-pause-814452" is "Ready"
	I1208 01:35:34.125630  998155 pod_ready.go:86] duration metric: took 4.644243ms for pod "kube-apiserver-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.128200  998155 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.506304  998155 pod_ready.go:94] pod "kube-controller-manager-pause-814452" is "Ready"
	I1208 01:35:34.506332  998155 pod_ready.go:86] duration metric: took 378.107079ms for pod "kube-controller-manager-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:34.706349  998155 pod_ready.go:83] waiting for pod "kube-proxy-r58c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.106772  998155 pod_ready.go:94] pod "kube-proxy-r58c9" is "Ready"
	I1208 01:35:35.106804  998155 pod_ready.go:86] duration metric: took 400.426474ms for pod "kube-proxy-r58c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.305724  998155 pod_ready.go:83] waiting for pod "kube-scheduler-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.706723  998155 pod_ready.go:94] pod "kube-scheduler-pause-814452" is "Ready"
	I1208 01:35:35.706754  998155 pod_ready.go:86] duration metric: took 400.994692ms for pod "kube-scheduler-pause-814452" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:35:35.706767  998155 pod_ready.go:40] duration metric: took 1.604787064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:35:35.766402  998155 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:35:35.771860  998155 out.go:179] * Done! kubectl is now configured to use "pause-814452" cluster and "default" namespace by default
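	
	(The final wait above checks that every kube-system pod carrying one of the listed control-plane labels reports the Ready condition. A rough client-go equivalent is sketched below; the kubeconfig path is copied from the log, and the "or be gone" case that minikube also accepts is omitted for brevity, so treat this as an assumption-laden illustration rather than the test's real helper.)
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    // isReady reports whether a pod has the Ready condition set to True.
	    func isReady(p corev1.Pod) bool {
	    	for _, c := range p.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
	
	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22054-789938/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	
	    	// The same label selectors the log waits on, one control-plane component each.
	    	selectors := []string{
	    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	    	}
	    	for _, sel := range selectors {
	    		for {
	    			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	    			if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
	    				fmt.Printf("pods matching %q are Ready\n", sel)
	    				break
	    			}
	    			time.Sleep(500 * time.Millisecond)
	    		}
	    	}
	    }
	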
	
	
	==> CRI-O <==
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.423599271Z" level=info msg="Created container 0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652: kube-system/kube-scheduler-pause-814452/kube-scheduler" id=68b2525e-013b-404d-9e6d-0e1577333d2a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.424040475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.424334353Z" level=info msg="Starting container: 0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652" id=312e197a-bb1f-43b0-be51-318c8ced4601 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.439360077Z" level=info msg="Started container" PID=2385 containerID=0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652 description=kube-system/kube-scheduler-pause-814452/kube-scheduler id=312e197a-bb1f-43b0-be51-318c8ced4601 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7a1816b365da6baac6edff7e4e4d01ea75c51176e2d67142e08a365ddc6a5aa
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.460237526Z" level=info msg="Created container 5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70: kube-system/kube-proxy-r58c9/kube-proxy" id=422697aa-50e8-4d85-ac78-33744e00e66c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.460888874Z" level=info msg="Starting container: 5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70" id=4c90b6b0-8bcf-44bb-95c4-a08c5d8da89a name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.467071689Z" level=info msg="Created container 516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9: kube-system/coredns-66bc5c9577-2sqj2/coredns" id=41eecfee-6cc3-48a6-aacd-47df591a6453 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.467778643Z" level=info msg="Starting container: 516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9" id=9a7beb22-7416-464f-84ed-a3cf7fb0f78e name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.4682933Z" level=info msg="Started container" PID=2401 containerID=5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70 description=kube-system/kube-proxy-r58c9/kube-proxy id=4c90b6b0-8bcf-44bb-95c4-a08c5d8da89a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c0a0ede1ff934c8af1b1fab6636e1e02c48aaec5e65285df6aadb6d4a2965e
	Dec 08 01:35:28 pause-814452 crio[2083]: time="2025-12-08T01:35:28.473610142Z" level=info msg="Started container" PID=2416 containerID=516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9 description=kube-system/coredns-66bc5c9577-2sqj2/coredns id=9a7beb22-7416-464f-84ed-a3cf7fb0f78e name=/runtime.v1.RuntimeService/StartContainer sandboxID=311d58bbe8eb4a5f187b30137e729ceedd99614db2a225841568e5804d0f8146
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.683343848Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.687978113Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.688010983Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.688034943Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.691138875Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.691171655Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.691202211Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.695627498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.695805633Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.695829174Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.708576813Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.708616682Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.708645524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.712452751Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:35:38 pause-814452 crio[2083]: time="2025-12-08T01:35:38.712524883Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	516e6700d90fc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   12 seconds ago       Running             coredns                   1                   311d58bbe8eb4       coredns-66bc5c9577-2sqj2               kube-system
	a3680b7de4124       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   12 seconds ago       Running             kindnet-cni               1                   08d9d89026b5e       kindnet-ckhk6                          kube-system
	5e4a8988656de       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   12 seconds ago       Running             kube-proxy                1                   c3c0a0ede1ff9       kube-proxy-r58c9                       kube-system
	0ff169efe568f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   12 seconds ago       Running             kube-scheduler            1                   c7a1816b365da       kube-scheduler-pause-814452            kube-system
	f519779927d0c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   12 seconds ago       Running             etcd                      1                   929cbc05654b3       etcd-pause-814452                      kube-system
	72fae15be4bbe       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   12 seconds ago       Running             kube-apiserver            1                   70beef3348fc0       kube-apiserver-pause-814452            kube-system
	77c31b6f8ace1       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   12 seconds ago       Running             kube-controller-manager   1                   4892415340a4f       kube-controller-manager-pause-814452   kube-system
	0c48e24a4f283       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Exited              coredns                   0                   311d58bbe8eb4       coredns-66bc5c9577-2sqj2               kube-system
	c5bcaed767ee2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   c3c0a0ede1ff9       kube-proxy-r58c9                       kube-system
	4c6686bd89422       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   08d9d89026b5e       kindnet-ckhk6                          kube-system
	a6c51a18766ff       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   4892415340a4f       kube-controller-manager-pause-814452   kube-system
	1ae0f5b191d45       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   70beef3348fc0       kube-apiserver-pause-814452            kube-system
	fc971b2b759f0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   929cbc05654b3       etcd-pause-814452                      kube-system
	8cd16f9edc4e1       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   c7a1816b365da       kube-scheduler-pause-814452            kube-system
	
	
	==> coredns [0c48e24a4f2830f84c268d5747efefe86bff53f180a2996a1ce53cafe8084888] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35011 - 40916 "HINFO IN 4410955698505796276.6222712833875172857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029297332s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [516e6700d90fc2fd367c7a012472926921fdb64f65d3d263c6daab700ad605c9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59534 - 8451 "HINFO IN 3564368914563567705.1534246744001046578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02313337s
	
	
	==> describe nodes <==
	Name:               pause-814452
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-814452
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=pause-814452
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_34_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-814452
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:34:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:35:30 +0000   Mon, 08 Dec 2025 01:35:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-814452
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                79d1954f-3523-43b9-be94-ccedb1953bc7
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2sqj2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     66s
	  kube-system                 etcd-pause-814452                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         71s
	  kube-system                 kindnet-ckhk6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-pause-814452             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-814452    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-r58c9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-814452             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 65s   kube-proxy       
	  Normal   Starting                 7s    kube-proxy       
	  Normal   Starting                 72s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s   kubelet          Node pause-814452 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s   kubelet          Node pause-814452 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s   kubelet          Node pause-814452 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           67s   node-controller  Node pause-814452 event: Registered Node pause-814452 in Controller
	  Normal   NodeReady                26s   kubelet          Node pause-814452 status is now: NodeReady
	  Normal   RegisteredNode           5s    node-controller  Node pause-814452 event: Registered Node pause-814452 in Controller
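	
	(For reference, the percentages in the Allocated resources table above are just the summed pod requests divided by the node's allocatable capacity: 850m of requested CPU against 2 allocatable cores is 42.5%, shown as 42%. A small Go sketch of that arithmetic using the apimachinery resource package, with the values hard-coded from this node, is below.)
	
	    package main
	
	    import (
	    	"fmt"
	
	    	"k8s.io/apimachinery/pkg/api/resource"
	    )
	
	    func main() {
	    	// Allocatable CPU on this node (2 cores) and the summed pod CPU requests (850m),
	    	// both taken from the describe output above.
	    	allocatable := resource.MustParse("2")
	    	requested := resource.MustParse("850m")
	
	    	// 850m / 2000m = 42.5%, rounded down to the 42% shown in the table.
	    	pct := float64(requested.MilliValue()) / float64(allocatable.MilliValue()) * 100
	    	fmt.Printf("cpu requests: %s of %s (%.1f%%)\n", requested.String(), allocatable.String(), pct)
	    }
	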
	
	
	==> dmesg <==
	[Dec 8 00:59] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:00] overlayfs: idmapped layers are currently not supported
	[  +3.041176] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:01] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:02] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:03] overlayfs: idmapped layers are currently not supported
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f519779927d0c1cacc137e51e3a87c1d92770575130264c402a32b88cbc9b9ff] <==
	{"level":"warn","ts":"2025-12-08T01:35:31.221623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.243013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.264576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.316101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.341059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.362221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.379841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.396643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.415455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.442026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.455527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.472441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.489745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.506809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.536315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.549982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.567925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.584734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.608035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.621150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.643878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.671120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.696956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.714097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:35:31.807716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	
	
	==> etcd [fc971b2b759f03f6993d33ca1968dc6621e7f4909a2d9bd81322d3748d26531c] <==
	{"level":"warn","ts":"2025-12-08T01:34:25.324766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.348093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.366540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.409420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.434895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.442908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:34:25.515537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T01:35:19.696997Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-08T01:35:19.697056Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-814452","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-08T01:35:19.699424Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T01:35:19.833898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T01:35:19.835333Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.835381Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-08T01:35:19.835447Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-08T01:35:19.835465Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835603Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T01:35:19.835637Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835707Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T01:35:19.835728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T01:35:19.835736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.838672Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-08T01:35:19.838743Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T01:35:19.838809Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-08T01:35:19.838867Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-814452","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 01:35:41 up  6:17,  0 user,  load average: 2.44, 1.66, 1.72
	Linux pause-814452 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c6686bd894226339c6a58287e9c746951355d8aaa7b2766d750bafe6e0ef125] <==
	I1208 01:34:34.639106       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:34:34.639340       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:34:34.639471       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:34:34.639488       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:34:34.639498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:34:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:34:34.836591       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:34:34.840074       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:34:34.840179       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:34:34.840343       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:35:04.837086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:35:04.837087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:35:04.837131       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1208 01:35:04.837206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1208 01:35:05.840584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:35:05.840627       1 metrics.go:72] Registering metrics
	I1208 01:35:05.840702       1 controller.go:711] "Syncing nftables rules"
	I1208 01:35:14.836123       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:35:14.836180       1 main.go:301] handling current node
	
	
	==> kindnet [a3680b7de412443e1c8d12250ee2cede99b8d1fdbbf8c22bb9dac29dd95111c7] <==
	I1208 01:35:28.524448       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:35:28.526965       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:35:28.528668       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:35:28.528745       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:35:28.528780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:35:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:35:28.680059       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:35:28.721582       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:35:28.721682       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:35:28.723637       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1208 01:35:32.722804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:35:32.723694       1 metrics.go:72] Registering metrics
	I1208 01:35:32.723771       1 controller.go:711] "Syncing nftables rules"
	I1208 01:35:38.682924       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:35:38.682974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ae0f5b191d45f4d7539586c6bdb8bd3d6d55dccdde8a1958d7a61d2aab11e16] <==
	W1208 01:35:19.715372       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715433       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715490       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715546       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.715614       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716058       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716119       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716165       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716325       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716375       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716420       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716469       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716515       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716565       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716615       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716662       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716707       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716762       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716810       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716854       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716898       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.716945       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717051       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717851       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1208 01:35:19.717924       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [72fae15be4bbe10f53114796f4ea74adf286d63828cec603838a2da804289607] <==
	I1208 01:35:32.336047       1 cluster_authentication_trust_controller.go:459] Starting cluster_authentication_trust_controller controller
	I1208 01:35:32.472318       1 shared_informer.go:349] "Waiting for caches to sync" controller="cluster_authentication_trust_controller"
	I1208 01:35:32.612195       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1208 01:35:32.621755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1208 01:35:32.621793       1 policy_source.go:240] refreshing policies
	I1208 01:35:32.637275       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:35:32.643082       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:35:32.655725       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:35:32.664909       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 01:35:32.665066       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 01:35:32.672065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1208 01:35:32.672076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:35:32.672213       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:35:32.672236       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 01:35:32.672344       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:35:32.672094       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:35:32.672109       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1208 01:35:32.672686       1 aggregator.go:171] initial CRD sync complete...
	I1208 01:35:32.672725       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 01:35:32.672768       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:35:32.672804       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:35:32.673155       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1208 01:35:32.679268       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:35:33.346020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:35:34.591190       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [77c31b6f8ace198a4aefaf71d8555126a3bf7bcca5e2a25630e54718ce6197f8] <==
	I1208 01:35:36.003497       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1208 01:35:36.010319       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:35:36.012658       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1208 01:35:36.015032       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 01:35:36.017033       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:35:36.019820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1208 01:35:36.024381       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 01:35:36.024514       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:35:36.024665       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:35:36.031525       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 01:35:36.034739       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 01:35:36.037492       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1208 01:35:36.040983       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:35:36.043016       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:35:36.043140       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:35:36.043370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:35:36.043649       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:35:36.043708       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:35:36.043751       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1208 01:35:36.044737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:35:36.044822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:35:36.044978       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 01:35:36.047536       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:35:36.057053       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 01:35:36.068778       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [a6c51a18766ffb95f6d93fbfd8a679375af0a21a094f3f35aaa2b242af11341a] <==
	I1208 01:34:33.408094       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 01:34:33.408685       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:34:33.407631       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1208 01:34:33.409732       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:34:33.410065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:34:33.411283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:34:33.411703       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1208 01:34:33.414064       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1208 01:34:33.414143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:34:33.414156       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1208 01:34:33.414241       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1208 01:34:33.414260       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1208 01:34:33.414266       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 01:34:33.414407       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:34:33.414543       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:34:33.414642       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-814452"
	I1208 01:34:33.414711       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1208 01:34:33.417467       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 01:34:33.420104       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:34:33.420133       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:34:33.420141       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:34:33.420588       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:34:33.432525       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-814452" podCIDRs=["10.244.0.0/24"]
	I1208 01:34:33.432623       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:35:18.422400       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e4a8988656de6251831dc3af5977f1e2145ecdb4887f7bc572b65e5f787ec70] <==
	I1208 01:35:28.593488       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:35:30.043982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1208 01:35:32.601637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-814452\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1208 01:35:33.716725       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:35:33.716772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:35:33.716863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:35:33.768315       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:35:33.768374       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:35:33.775010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:35:33.775342       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:35:33.775366       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:35:33.777014       1 config.go:200] "Starting service config controller"
	I1208 01:35:33.777041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:35:33.777060       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:35:33.777064       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:35:33.777089       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:35:33.777094       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:35:33.778028       1 config.go:309] "Starting node config controller"
	I1208 01:35:33.778061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:35:33.778068       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:35:33.877563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:35:33.877572       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:35:33.877610       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5bcaed767ee20798c289a81cff78ee72118c9dac05df51f051e1c6f897a67dc] <==
	I1208 01:34:34.714126       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:34:34.856474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:34:34.980061       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:34:34.980094       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:34:34.980169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:34:35.044186       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:34:35.044313       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:34:35.049500       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:34:35.049859       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:34:35.050078       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:34:35.066153       1 config.go:200] "Starting service config controller"
	I1208 01:34:35.066257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:34:35.066314       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:34:35.066343       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:34:35.066389       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:34:35.066418       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:34:35.067295       1 config.go:309] "Starting node config controller"
	I1208 01:34:35.067364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:34:35.067394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:34:35.167123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:34:35.167166       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:34:35.167205       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ff169efe568ffa1e39892d79697a5441e19cf82f624ad456cb7d320a5edb652] <==
	I1208 01:35:30.807403       1 serving.go:386] Generated self-signed cert in-memory
	W1208 01:35:32.439063       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 01:35:32.439190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 01:35:32.439226       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 01:35:32.439382       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 01:35:32.590087       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:35:32.590552       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:35:32.602759       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:35:32.603039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:32.603099       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:32.603144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:35:32.703413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8cd16f9edc4e1e4022aaa7e6b1d47dde1604ec4d2a8f7dc7c66f7b38f6853d14] <==
	E1208 01:34:26.405195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:34:26.405257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 01:34:26.405315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 01:34:26.405372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 01:34:26.405422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 01:34:26.405479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 01:34:26.405573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 01:34:26.405651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 01:34:26.408599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 01:34:27.226002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 01:34:27.291361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:34:27.299220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 01:34:27.318504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 01:34:27.319715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 01:34:27.343488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 01:34:27.521651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 01:34:27.575865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 01:34:27.588387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1208 01:34:27.958451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:19.690768       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1208 01:35:19.690797       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1208 01:35:19.690818       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1208 01:35:19.691084       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:35:19.691389       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1208 01:35:19.691408       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="e09c741f-bc11-4b44-bd32-16d50b32078a" pod="kube-system/kindnet-ckhk6"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.593154    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-proxy-r58c9" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="fadb8bf1-b94e-45ea-8bd9-0b456753562e" pod="kube-system/kube-proxy-r58c9"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.594094    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "coredns-66bc5c9577-2sqj2" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="d86f0b0f-fe4f-4c56-ab4e-b56d4ff27d98" pod="kube-system/coredns-66bc5c9577-2sqj2"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.595141    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "etcd-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="306159274b6322f70291775224f95d81" pod="kube-system/etcd-pause-814452"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.596231    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-scheduler-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="00c12c8f188df6cf0bae7798770b792d" pod="kube-system/kube-scheduler-pause-814452"
	Dec 08 01:35:32 pause-814452 kubelet[1318]: E1208 01:35:32.597116    1318 status_manager.go:1018] "Failed to get status for pod" err=<
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         pods "kube-apiserver-pause-814452" is forbidden: User "system:node:pause-814452" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-814452' and this object
	Dec 08 01:35:32 pause-814452 kubelet[1318]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Dec 08 01:35:32 pause-814452 kubelet[1318]:  > podUID="ab433440dd54af3441c62dc268aef562" pod="kube-system/kube-apiserver-pause-814452"
	Dec 08 01:35:36 pause-814452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:35:36 pause-814452 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:35:36 pause-814452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-814452 -n pause-814452
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-814452 -n pause-814452: exit status 2 (364.796007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-814452 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.917305ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:39:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-661561 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-661561 describe deploy/metrics-server -n kube-system: exit status 1 (91.521299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-661561 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-661561
helpers_test.go:243: (dbg) docker inspect old-k8s-version-661561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	        "Created": "2025-12-08T01:37:59.095493293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1013171,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:37:59.168276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hostname",
	        "HostsPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hosts",
	        "LogPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f-json.log",
	        "Name": "/old-k8s-version-661561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-661561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-661561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	                "LowerDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-661561",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-661561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-661561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72c901cc745786efcf63513b9dadc0a449b5a53b46943d671636d3d727774a9b",
	            "SandboxKey": "/var/run/docker/netns/72c901cc7457",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33773"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33774"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-661561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f3:09:97:7c:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f564ce91e5f1a7355aa0c3c6eaf3b409225f9ea728cbb26fa06f64c7acc7ac75",
	                    "EndpointID": "0db5c78e7d61c49fb1afcfe1b7c157f87555ec273e59b9c2306ce21fb8f7d84f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-661561",
	                        "bab08c504dac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25: (1.277340334s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-000739 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo docker system info                                                                                                                                                                                                      │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo containerd config dump                                                                                                                                                                                                  │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo crio config                                                                                                                                                                                                             │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ delete  │ -p cilium-000739                                                                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:37:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:37:53.278021 1012781 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:37:53.278233 1012781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:37:53.278260 1012781 out.go:374] Setting ErrFile to fd 2...
	I1208 01:37:53.278283 1012781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:37:53.278691 1012781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:37:53.279291 1012781 out.go:368] Setting JSON to false
	I1208 01:37:53.280284 1012781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22806,"bootTime":1765135068,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:37:53.280406 1012781 start.go:143] virtualization:  
	I1208 01:37:53.284142 1012781 out.go:179] * [old-k8s-version-661561] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:37:53.286697 1012781 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:37:53.286773 1012781 notify.go:221] Checking for updates...
	I1208 01:37:53.290929 1012781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:37:53.293941 1012781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:37:53.296917 1012781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:37:53.299962 1012781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:37:53.302899 1012781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:37:53.306312 1012781 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:37:53.306499 1012781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:37:53.334472 1012781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:37:53.334592 1012781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:37:53.395733 1012781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:37:53.386525256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:37:53.395857 1012781 docker.go:319] overlay module found
	I1208 01:37:53.399117 1012781 out.go:179] * Using the docker driver based on user configuration
	I1208 01:37:53.402048 1012781 start.go:309] selected driver: docker
	I1208 01:37:53.402068 1012781 start.go:927] validating driver "docker" against <nil>
	I1208 01:37:53.402097 1012781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:37:53.402939 1012781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:37:53.456721 1012781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:37:53.446576816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:37:53.456872 1012781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:37:53.457096 1012781 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:37:53.460073 1012781 out.go:179] * Using Docker driver with root privileges
	I1208 01:37:53.463093 1012781 cni.go:84] Creating CNI manager for ""
	I1208 01:37:53.463175 1012781 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:37:53.463189 1012781 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:37:53.463267 1012781 start.go:353] cluster config:
	{Name:old-k8s-version-661561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-661561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:37:53.466509 1012781 out.go:179] * Starting "old-k8s-version-661561" primary control-plane node in "old-k8s-version-661561" cluster
	I1208 01:37:53.469367 1012781 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:37:53.472379 1012781 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:37:53.475339 1012781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 01:37:53.475395 1012781 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:37:53.475405 1012781 cache.go:65] Caching tarball of preloaded images
	I1208 01:37:53.475431 1012781 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:37:53.475501 1012781 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:37:53.475511 1012781 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1208 01:37:53.475625 1012781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/config.json ...
	I1208 01:37:53.475650 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/config.json: {Name:mk0298f7055a346f8231107570e459d1e81d5c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:53.495980 1012781 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:37:53.496005 1012781 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:37:53.496020 1012781 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:37:53.496053 1012781 start.go:360] acquireMachinesLock for old-k8s-version-661561: {Name:mk7768563a752a1561372dcac25cc4a6bd2144dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:53.496159 1012781 start.go:364] duration metric: took 84.875µs to acquireMachinesLock for "old-k8s-version-661561"
	I1208 01:37:53.496191 1012781 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-661561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-661561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:37:53.496374 1012781 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:37:53.501748 1012781 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:37:53.502041 1012781 start.go:159] libmachine.API.Create for "old-k8s-version-661561" (driver="docker")
	I1208 01:37:53.502082 1012781 client.go:173] LocalClient.Create starting
	I1208 01:37:53.502175 1012781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:37:53.502222 1012781 main.go:143] libmachine: Decoding PEM data...
	I1208 01:37:53.502240 1012781 main.go:143] libmachine: Parsing certificate...
	I1208 01:37:53.502309 1012781 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:37:53.502358 1012781 main.go:143] libmachine: Decoding PEM data...
	I1208 01:37:53.502381 1012781 main.go:143] libmachine: Parsing certificate...
	I1208 01:37:53.502927 1012781 cli_runner.go:164] Run: docker network inspect old-k8s-version-661561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:37:53.523191 1012781 cli_runner.go:211] docker network inspect old-k8s-version-661561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:37:53.523304 1012781 network_create.go:284] running [docker network inspect old-k8s-version-661561] to gather additional debugging logs...
	I1208 01:37:53.523330 1012781 cli_runner.go:164] Run: docker network inspect old-k8s-version-661561
	W1208 01:37:53.538250 1012781 cli_runner.go:211] docker network inspect old-k8s-version-661561 returned with exit code 1
	I1208 01:37:53.538286 1012781 network_create.go:287] error running [docker network inspect old-k8s-version-661561]: docker network inspect old-k8s-version-661561: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-661561 not found
	I1208 01:37:53.538308 1012781 network_create.go:289] output of [docker network inspect old-k8s-version-661561]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-661561 not found
	
	** /stderr **
	I1208 01:37:53.538436 1012781 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:37:53.555611 1012781 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:37:53.555979 1012781 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:37:53.556315 1012781 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:37:53.556743 1012781 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001999410}
	I1208 01:37:53.556766 1012781 network_create.go:124] attempt to create docker network old-k8s-version-661561 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:37:53.556830 1012781 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-661561 old-k8s-version-661561
	I1208 01:37:53.630160 1012781 network_create.go:108] docker network old-k8s-version-661561 192.168.76.0/24 created
	I1208 01:37:53.630194 1012781 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-661561" container
	I1208 01:37:53.630266 1012781 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:37:53.646623 1012781 cli_runner.go:164] Run: docker volume create old-k8s-version-661561 --label name.minikube.sigs.k8s.io=old-k8s-version-661561 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:37:53.665684 1012781 oci.go:103] Successfully created a docker volume old-k8s-version-661561
	I1208 01:37:53.665768 1012781 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-661561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-661561 --entrypoint /usr/bin/test -v old-k8s-version-661561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:37:54.198992 1012781 oci.go:107] Successfully prepared a docker volume old-k8s-version-661561
	I1208 01:37:54.199081 1012781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 01:37:54.199099 1012781 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:37:54.199208 1012781 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-661561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:37:59.013781 1012781 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-661561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.814517058s)
	I1208 01:37:59.013817 1012781 kic.go:203] duration metric: took 4.81471491s to extract preloaded images to volume ...
	W1208 01:37:59.013962 1012781 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:37:59.014075 1012781 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:37:59.078161 1012781 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-661561 --name old-k8s-version-661561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-661561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-661561 --network old-k8s-version-661561 --ip 192.168.76.2 --volume old-k8s-version-661561:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:37:59.388173 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Running}}
	I1208 01:37:59.409506 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:37:59.435893 1012781 cli_runner.go:164] Run: docker exec old-k8s-version-661561 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:37:59.503305 1012781 oci.go:144] the created container "old-k8s-version-661561" has a running status.
	I1208 01:37:59.503330 1012781 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa...
	I1208 01:37:59.758814 1012781 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:37:59.779451 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:37:59.797680 1012781 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:37:59.797705 1012781 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-661561 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:37:59.839989 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:37:59.857352 1012781 machine.go:94] provisionDockerMachine start ...
	I1208 01:37:59.857464 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:37:59.874487 1012781 main.go:143] libmachine: Using SSH client type: native
	I1208 01:37:59.874942 1012781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1208 01:37:59.874961 1012781 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:37:59.875604 1012781 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52220->127.0.0.1:33772: read: connection reset by peer
	I1208 01:38:03.031134 1012781 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-661561
	
	I1208 01:38:03.031166 1012781 ubuntu.go:182] provisioning hostname "old-k8s-version-661561"
	I1208 01:38:03.031233 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:03.055902 1012781 main.go:143] libmachine: Using SSH client type: native
	I1208 01:38:03.056221 1012781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1208 01:38:03.056240 1012781 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-661561 && echo "old-k8s-version-661561" | sudo tee /etc/hostname
	I1208 01:38:03.220606 1012781 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-661561
	
	I1208 01:38:03.220703 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:03.238417 1012781 main.go:143] libmachine: Using SSH client type: native
	I1208 01:38:03.238737 1012781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1208 01:38:03.238760 1012781 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-661561' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-661561/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-661561' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:38:03.390925 1012781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:38:03.390953 1012781 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:38:03.390974 1012781 ubuntu.go:190] setting up certificates
	I1208 01:38:03.390985 1012781 provision.go:84] configureAuth start
	I1208 01:38:03.391045 1012781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-661561
	I1208 01:38:03.408416 1012781 provision.go:143] copyHostCerts
	I1208 01:38:03.408503 1012781 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:38:03.408518 1012781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:38:03.408594 1012781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:38:03.408700 1012781 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:38:03.408711 1012781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:38:03.408738 1012781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:38:03.408802 1012781 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:38:03.408811 1012781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:38:03.408838 1012781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:38:03.408910 1012781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-661561 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-661561]
	I1208 01:38:03.543835 1012781 provision.go:177] copyRemoteCerts
	I1208 01:38:03.543910 1012781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:38:03.543956 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:03.560421 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:03.666559 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:38:03.684392 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1208 01:38:03.702587 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:38:03.720828 1012781 provision.go:87] duration metric: took 329.818658ms to configureAuth
	I1208 01:38:03.720854 1012781 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:38:03.721043 1012781 config.go:182] Loaded profile config "old-k8s-version-661561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1208 01:38:03.721160 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:03.739913 1012781 main.go:143] libmachine: Using SSH client type: native
	I1208 01:38:03.740239 1012781 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33772 <nil> <nil>}
	I1208 01:38:03.740260 1012781 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:38:04.062648 1012781 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:38:04.062673 1012781 machine.go:97] duration metric: took 4.205293802s to provisionDockerMachine
	I1208 01:38:04.062684 1012781 client.go:176] duration metric: took 10.560595093s to LocalClient.Create
	I1208 01:38:04.062712 1012781 start.go:167] duration metric: took 10.56067045s to libmachine.API.Create "old-k8s-version-661561"
	I1208 01:38:04.062725 1012781 start.go:293] postStartSetup for "old-k8s-version-661561" (driver="docker")
	I1208 01:38:04.062735 1012781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:38:04.062803 1012781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:38:04.062909 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:04.083559 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:04.191316 1012781 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:38:04.194884 1012781 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:38:04.194914 1012781 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:38:04.194932 1012781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:38:04.194987 1012781 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:38:04.195071 1012781 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:38:04.195181 1012781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:38:04.202798 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:38:04.221558 1012781 start.go:296] duration metric: took 158.817752ms for postStartSetup
	I1208 01:38:04.221929 1012781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-661561
	I1208 01:38:04.240558 1012781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/config.json ...
	I1208 01:38:04.240868 1012781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:38:04.240946 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:04.260449 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:04.364010 1012781 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:38:04.369076 1012781 start.go:128] duration metric: took 10.872685891s to createHost
	I1208 01:38:04.369103 1012781 start.go:83] releasing machines lock for "old-k8s-version-661561", held for 10.872928511s
	I1208 01:38:04.369173 1012781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-661561
	I1208 01:38:04.393626 1012781 ssh_runner.go:195] Run: cat /version.json
	I1208 01:38:04.393659 1012781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:38:04.393702 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:04.393719 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:04.414968 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:04.422581 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:04.620565 1012781 ssh_runner.go:195] Run: systemctl --version
	I1208 01:38:04.627549 1012781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:38:04.665380 1012781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:38:04.670542 1012781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:38:04.670631 1012781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:38:04.700652 1012781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:38:04.700676 1012781 start.go:496] detecting cgroup driver to use...
	I1208 01:38:04.700708 1012781 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:38:04.700772 1012781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:38:04.718873 1012781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:38:04.735533 1012781 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:38:04.735596 1012781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:38:04.752756 1012781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:38:04.772635 1012781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:38:04.895497 1012781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:38:05.054765 1012781 docker.go:234] disabling docker service ...
	I1208 01:38:05.054926 1012781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:38:05.080680 1012781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:38:05.095909 1012781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:38:05.227630 1012781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:38:05.343857 1012781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:38:05.357525 1012781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:38:05.373008 1012781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1208 01:38:05.373072 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.382348 1012781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:38:05.382412 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.392276 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.401805 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.411233 1012781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:38:05.420158 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.429342 1012781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.442642 1012781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:38:05.451652 1012781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:38:05.459425 1012781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:38:05.466688 1012781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:38:05.592393 1012781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:38:05.754784 1012781 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:38:05.754895 1012781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:38:05.758743 1012781 start.go:564] Will wait 60s for crictl version
	I1208 01:38:05.758832 1012781 ssh_runner.go:195] Run: which crictl
	I1208 01:38:05.762451 1012781 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:38:05.788302 1012781 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:38:05.788417 1012781 ssh_runner.go:195] Run: crio --version
	I1208 01:38:05.819275 1012781 ssh_runner.go:195] Run: crio --version
	I1208 01:38:05.854194 1012781 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1208 01:38:05.857063 1012781 cli_runner.go:164] Run: docker network inspect old-k8s-version-661561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:38:05.873439 1012781 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:38:05.877678 1012781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:38:05.894808 1012781 kubeadm.go:884] updating cluster {Name:old-k8s-version-661561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-661561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:38:05.894953 1012781 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 01:38:05.895018 1012781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:38:05.929873 1012781 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:38:05.929901 1012781 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:38:05.929984 1012781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:38:05.957253 1012781 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:38:05.957278 1012781 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:38:05.957286 1012781 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1208 01:38:05.957377 1012781 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-661561 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-661561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:38:05.957459 1012781 ssh_runner.go:195] Run: crio config
	I1208 01:38:06.016965 1012781 cni.go:84] Creating CNI manager for ""
	I1208 01:38:06.016992 1012781 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:38:06.017016 1012781 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:38:06.017039 1012781 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-661561 NodeName:old-k8s-version-661561 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:38:06.017207 1012781 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-661561"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:38:06.017287 1012781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1208 01:38:06.025805 1012781 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:38:06.025927 1012781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:38:06.034279 1012781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1208 01:38:06.051750 1012781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:38:06.068771 1012781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
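
The InitConfiguration/ClusterConfiguration/KubeletConfiguration document above is rendered from the kubeadm options logged at kubeadm.go:190 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2160-byte scp just above). A minimal Go sketch of such a render step, using a hypothetical parameter struct and template rather than minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries only the fields this sketch needs; the option
// struct logged at kubeadm.go:190 has many more.
type kubeadmParams struct {
	APIServerPort     int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		APIServerPort:     8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.28.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	// Render to stdout; the generated YAML would then be copied to the node,
	// as the scp to /var/tmp/minikube/kubeadm.yaml.new shows above.
	t := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
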
	I1208 01:38:06.082271 1012781 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:38:06.086142 1012781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
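
The grep/echo pipeline above makes the control-plane.minikube.internal entry idempotent: any stale line for that hostname is dropped, then the current IP is appended. A self-contained Go sketch of the same rewrite, operating on the file contents in memory (injectHostRecord is a hypothetical helper, not a minikube function):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord drops any existing line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log above.
func injectHostRecord(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	orig := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n"
	fmt.Print(injectHostRecord(orig, "192.168.76.2", "control-plane.minikube.internal"))
}
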
	I1208 01:38:06.098710 1012781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:38:06.220554 1012781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:38:06.239178 1012781 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561 for IP: 192.168.76.2
	I1208 01:38:06.239248 1012781 certs.go:195] generating shared ca certs ...
	I1208 01:38:06.239282 1012781 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.239485 1012781 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:38:06.239570 1012781 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:38:06.239596 1012781 certs.go:257] generating profile certs ...
	I1208 01:38:06.239691 1012781 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.key
	I1208 01:38:06.239732 1012781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt with IP's: []
	I1208 01:38:06.461867 1012781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt ...
	I1208 01:38:06.461901 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: {Name:mkd35c757b71ab85b2ce108b0959dbe18504cfd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.462105 1012781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.key ...
	I1208 01:38:06.462122 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.key: {Name:mkbbfdf540085e99e78080a9826c405852b7319d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.462214 1012781 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key.5dfb4f98
	I1208 01:38:06.462235 1012781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt.5dfb4f98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:38:06.687653 1012781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt.5dfb4f98 ...
	I1208 01:38:06.687687 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt.5dfb4f98: {Name:mkcb96d006f8b02d522cfd46e11128b366d7fca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.687869 1012781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key.5dfb4f98 ...
	I1208 01:38:06.687884 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key.5dfb4f98: {Name:mk7b5091ed2b7a2e3a9e98feb65b9928e9d3daab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.687967 1012781 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt.5dfb4f98 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt
	I1208 01:38:06.688048 1012781 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key.5dfb4f98 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key
	I1208 01:38:06.688114 1012781 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.key
	I1208 01:38:06.688142 1012781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.crt with IP's: []
	I1208 01:38:06.883563 1012781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.crt ...
	I1208 01:38:06.883600 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.crt: {Name:mk13e97f83eb7b3506c4ef0869291e0b80770df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.883782 1012781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.key ...
	I1208 01:38:06.883797 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.key: {Name:mk89222158ef640f089b235c7a3b02d68af16eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:06.883982 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:38:06.884033 1012781 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:38:06.884047 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:38:06.884075 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:38:06.884103 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:38:06.884132 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:38:06.884184 1012781 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:38:06.884752 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:38:06.905172 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:38:06.927376 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:38:06.948160 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:38:06.969176 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1208 01:38:06.988540 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:38:07.025117 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:38:07.059071 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:38:07.079608 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:38:07.105413 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:38:07.125290 1012781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:38:07.143649 1012781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:38:07.157412 1012781 ssh_runner.go:195] Run: openssl version
	I1208 01:38:07.164243 1012781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:38:07.177650 1012781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:38:07.186078 1012781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:38:07.189830 1012781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:38:07.189933 1012781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:38:07.232543 1012781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:38:07.240225 1012781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:38:07.248560 1012781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:38:07.256715 1012781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:38:07.265492 1012781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:38:07.269476 1012781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:38:07.269546 1012781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:38:07.312742 1012781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:38:07.320469 1012781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:38:07.328201 1012781 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:38:07.335717 1012781 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:38:07.343686 1012781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:38:07.347733 1012781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:38:07.347798 1012781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:38:07.388378 1012781 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:38:07.395678 1012781 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
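
Each openssl x509 -hash call above prints the certificate's subject hash, and the ln -fs that follows publishes the PEM under /etc/ssl/certs/<hash>.0, the layout OpenSSL-linked clients use to locate trusted CAs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A Go sketch of that pairing, assuming the openssl binary is on PATH and the process may write to the target directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir as "<subject-hash>.0".
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
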
	I1208 01:38:07.403075 1012781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:38:07.407010 1012781 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:38:07.407064 1012781 kubeadm.go:401] StartCluster: {Name:old-k8s-version-661561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-661561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:38:07.407147 1012781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:38:07.407207 1012781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:38:07.434460 1012781 cri.go:89] found id: ""
	I1208 01:38:07.434545 1012781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:38:07.443437 1012781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:38:07.451952 1012781 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:38:07.452027 1012781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:38:07.460633 1012781 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:38:07.460667 1012781 kubeadm.go:158] found existing configuration files:
	
	I1208 01:38:07.460730 1012781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:38:07.468383 1012781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:38:07.468482 1012781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:38:07.475714 1012781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:38:07.483334 1012781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:38:07.483427 1012781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:38:07.490787 1012781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:38:07.498247 1012781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:38:07.498343 1012781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:38:07.505797 1012781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:38:07.513718 1012781 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:38:07.513808 1012781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:38:07.521364 1012781 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:38:07.569869 1012781 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1208 01:38:07.569932 1012781 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:38:07.609232 1012781 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:38:07.609310 1012781 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:38:07.609353 1012781 kubeadm.go:319] OS: Linux
	I1208 01:38:07.609403 1012781 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:38:07.609454 1012781 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:38:07.609506 1012781 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:38:07.609563 1012781 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:38:07.609616 1012781 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:38:07.609667 1012781 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:38:07.609724 1012781 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:38:07.609775 1012781 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:38:07.609826 1012781 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:38:07.693419 1012781 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:38:07.693576 1012781 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:38:07.693731 1012781 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1208 01:38:07.863838 1012781 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:38:07.870522 1012781 out.go:252]   - Generating certificates and keys ...
	I1208 01:38:07.870690 1012781 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:38:07.870792 1012781 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:38:08.323264 1012781 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:38:08.797406 1012781 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:38:09.062759 1012781 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:38:09.321389 1012781 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:38:09.925351 1012781 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:38:09.925496 1012781 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-661561] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:38:10.109843 1012781 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:38:10.110022 1012781 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-661561] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:38:10.573148 1012781 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:38:10.898807 1012781 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:38:11.274145 1012781 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:38:11.274431 1012781 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:38:12.038804 1012781 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:38:12.615004 1012781 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:38:13.089185 1012781 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:38:13.469278 1012781 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:38:13.470081 1012781 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:38:13.472955 1012781 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:38:13.476207 1012781 out.go:252]   - Booting up control plane ...
	I1208 01:38:13.476319 1012781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:38:13.476411 1012781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:38:13.476483 1012781 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:38:13.494008 1012781 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:38:13.495339 1012781 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:38:13.495425 1012781 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:38:13.624302 1012781 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 01:38:21.128607 1012781 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.504271 seconds
	I1208 01:38:21.128730 1012781 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 01:38:21.148970 1012781 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 01:38:21.681905 1012781 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 01:38:21.682117 1012781 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-661561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 01:38:22.194438 1012781 kubeadm.go:319] [bootstrap-token] Using token: f72ocj.tdr22b7g3l2lq5dd
	I1208 01:38:22.197396 1012781 out.go:252]   - Configuring RBAC rules ...
	I1208 01:38:22.197555 1012781 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 01:38:22.205879 1012781 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 01:38:22.219368 1012781 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 01:38:22.223609 1012781 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 01:38:22.229312 1012781 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 01:38:22.234043 1012781 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 01:38:22.251472 1012781 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 01:38:22.547020 1012781 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 01:38:22.615657 1012781 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 01:38:22.616290 1012781 kubeadm.go:319] 
	I1208 01:38:22.616376 1012781 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 01:38:22.616391 1012781 kubeadm.go:319] 
	I1208 01:38:22.616495 1012781 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 01:38:22.616504 1012781 kubeadm.go:319] 
	I1208 01:38:22.616529 1012781 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 01:38:22.616601 1012781 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 01:38:22.616658 1012781 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 01:38:22.616666 1012781 kubeadm.go:319] 
	I1208 01:38:22.616724 1012781 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 01:38:22.616732 1012781 kubeadm.go:319] 
	I1208 01:38:22.616783 1012781 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 01:38:22.616791 1012781 kubeadm.go:319] 
	I1208 01:38:22.616853 1012781 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 01:38:22.616932 1012781 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 01:38:22.617026 1012781 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 01:38:22.617037 1012781 kubeadm.go:319] 
	I1208 01:38:22.617132 1012781 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 01:38:22.617226 1012781 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 01:38:22.617234 1012781 kubeadm.go:319] 
	I1208 01:38:22.617318 1012781 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f72ocj.tdr22b7g3l2lq5dd \
	I1208 01:38:22.617434 1012781 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 01:38:22.617458 1012781 kubeadm.go:319] 	--control-plane 
	I1208 01:38:22.617466 1012781 kubeadm.go:319] 
	I1208 01:38:22.617560 1012781 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 01:38:22.617569 1012781 kubeadm.go:319] 
	I1208 01:38:22.617663 1012781 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f72ocj.tdr22b7g3l2lq5dd \
	I1208 01:38:22.617773 1012781 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 01:38:22.635889 1012781 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:38:22.636026 1012781 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:38:22.636068 1012781 cni.go:84] Creating CNI manager for ""
	I1208 01:38:22.636091 1012781 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:38:22.640010 1012781 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 01:38:22.643029 1012781 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 01:38:22.647814 1012781 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1208 01:38:22.647839 1012781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 01:38:22.674966 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 01:38:23.780677 1012781 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.105672639s)
	I1208 01:38:23.780724 1012781 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 01:38:23.780852 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:23.780930 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-661561 minikube.k8s.io/updated_at=2025_12_08T01_38_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=old-k8s-version-661561 minikube.k8s.io/primary=true
	I1208 01:38:23.980302 1012781 ops.go:34] apiserver oom_adj: -16
	I1208 01:38:23.980437 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:24.480966 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:24.980506 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:25.480952 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:25.981007 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:26.480687 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:26.981500 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:27.481020 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:27.981146 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:28.481399 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:28.981513 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:29.481491 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:29.981299 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:30.480590 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:30.981063 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:31.481178 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:31.981405 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:32.480639 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:32.981391 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:33.480484 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:33.980797 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:34.481033 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:34.981306 1012781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:38:35.130082 1012781 kubeadm.go:1114] duration metric: took 11.349276962s to wait for elevateKubeSystemPrivileges
	I1208 01:38:35.130116 1012781 kubeadm.go:403] duration metric: took 27.72305724s to StartCluster
	I1208 01:38:35.130134 1012781 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:35.130195 1012781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:38:35.131189 1012781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:38:35.131418 1012781 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:38:35.131538 1012781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 01:38:35.131821 1012781 config.go:182] Loaded profile config "old-k8s-version-661561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1208 01:38:35.131879 1012781 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:38:35.131948 1012781 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-661561"
	I1208 01:38:35.131966 1012781 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-661561"
	I1208 01:38:35.131992 1012781 host.go:66] Checking if "old-k8s-version-661561" exists ...
	I1208 01:38:35.132562 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:38:35.133069 1012781 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-661561"
	I1208 01:38:35.133093 1012781 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-661561"
	I1208 01:38:35.133353 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:38:35.136330 1012781 out.go:179] * Verifying Kubernetes components...
	I1208 01:38:35.139074 1012781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:38:35.170091 1012781 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-661561"
	I1208 01:38:35.170134 1012781 host.go:66] Checking if "old-k8s-version-661561" exists ...
	I1208 01:38:35.170597 1012781 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:38:35.180738 1012781 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:38:35.182962 1012781 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:38:35.182984 1012781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:38:35.183046 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:35.217816 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:35.222985 1012781 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:38:35.223006 1012781 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:38:35.223082 1012781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:38:35.258770 1012781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33772 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:38:35.495718 1012781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:38:35.581892 1012781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 01:38:35.582109 1012781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:38:35.585573 1012781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:38:36.714915 1012781 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.132752172s)
	I1208 01:38:36.715851 1012781 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-661561" to be "Ready" ...
	I1208 01:38:36.716201 1012781 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.134229072s)
	I1208 01:38:36.716229 1012781 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
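
The sed pipeline that just completed rewrites the coredns ConfigMap so that the Corefile gains a log directive before errors and, ahead of the forward-to-/etc/resolv.conf line, a hosts block for host.minikube.internal. Reconstructed from the sed expression (indentation approximate), the injected fragment is:

hosts {
   192.168.76.1 host.minikube.internal
   fallthrough
}
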
	I1208 01:38:36.717305 1012781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.131662436s)
	I1208 01:38:36.718892 1012781 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.223095331s)
	I1208 01:38:36.770187 1012781 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1208 01:38:36.773653 1012781 addons.go:530] duration metric: took 1.641767436s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1208 01:38:37.221379 1012781 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-661561" context rescaled to 1 replicas
	W1208 01:38:38.724749 1012781 node_ready.go:57] node "old-k8s-version-661561" has "Ready":"False" status (will retry)
	W1208 01:38:41.219654 1012781 node_ready.go:57] node "old-k8s-version-661561" has "Ready":"False" status (will retry)
	W1208 01:38:43.719670 1012781 node_ready.go:57] node "old-k8s-version-661561" has "Ready":"False" status (will retry)
	W1208 01:38:46.218975 1012781 node_ready.go:57] node "old-k8s-version-661561" has "Ready":"False" status (will retry)
	W1208 01:38:48.219106 1012781 node_ready.go:57] node "old-k8s-version-661561" has "Ready":"False" status (will retry)
	I1208 01:38:49.719375 1012781 node_ready.go:49] node "old-k8s-version-661561" is "Ready"
	I1208 01:38:49.719401 1012781 node_ready.go:38] duration metric: took 13.003522506s for node "old-k8s-version-661561" to be "Ready" ...
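
node_ready.go's wait above re-checks the Node object every couple of seconds until its Ready condition turns True (about 13s after kubelet start here). A stand-alone Go sketch of an equivalent poll with client-go, using a hypothetical kubeconfig source and timeouts (not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named Node until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // retry interval, roughly matching the log cadence
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "old-k8s-version-661561"))
}
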
	I1208 01:38:49.719414 1012781 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:38:49.719475 1012781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:38:49.737770 1012781 api_server.go:72] duration metric: took 14.606313135s to wait for apiserver process to appear ...
	I1208 01:38:49.737793 1012781 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:38:49.737814 1012781 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1208 01:38:49.746598 1012781 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1208 01:38:49.748270 1012781 api_server.go:141] control plane version: v1.28.0
	I1208 01:38:49.748298 1012781 api_server.go:131] duration metric: took 10.497379ms to wait for apiserver health ...
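
The healthz probe above is an HTTPS GET against the apiserver that must come back 200 with body "ok" before the control plane version is read. A minimal Go sketch of the same check (TLS verification is skipped here purely to keep the sketch short; a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: accept any server certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
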
	I1208 01:38:49.748318 1012781 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:38:49.752298 1012781 system_pods.go:59] 8 kube-system pods found
	I1208 01:38:49.752331 1012781 system_pods.go:61] "coredns-5dd5756b68-xxvjs" [c84c7ea3-5cfe-4b51-8ecc-4ae685979421] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:38:49.752337 1012781 system_pods.go:61] "etcd-old-k8s-version-661561" [dbc43c9a-b5a3-42dd-ae59-b2125a277983] Running
	I1208 01:38:49.752342 1012781 system_pods.go:61] "kindnet-9jp8g" [e0fbe6c7-ca22-4bab-be07-f045aeed304c] Running
	I1208 01:38:49.752346 1012781 system_pods.go:61] "kube-apiserver-old-k8s-version-661561" [908b1140-c8a0-479c-9c16-9866dfd5cea7] Running
	I1208 01:38:49.752350 1012781 system_pods.go:61] "kube-controller-manager-old-k8s-version-661561" [008ad459-4608-4a91-9512-21f2a7f6cfa8] Running
	I1208 01:38:49.752354 1012781 system_pods.go:61] "kube-proxy-c9bhh" [073ff9de-ffe4-4516-85e7-896806ec173b] Running
	I1208 01:38:49.752357 1012781 system_pods.go:61] "kube-scheduler-old-k8s-version-661561" [f01daccd-291d-4abe-9768-bddd37fc013f] Running
	I1208 01:38:49.752363 1012781 system_pods.go:61] "storage-provisioner" [a060a4ca-7a53-4438-8766-27c6b138d922] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:38:49.752370 1012781 system_pods.go:74] duration metric: took 4.043403ms to wait for pod list to return data ...
	I1208 01:38:49.752378 1012781 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:38:49.755104 1012781 default_sa.go:45] found service account: "default"
	I1208 01:38:49.755132 1012781 default_sa.go:55] duration metric: took 2.742941ms for default service account to be created ...
	I1208 01:38:49.755141 1012781 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:38:49.759584 1012781 system_pods.go:86] 8 kube-system pods found
	I1208 01:38:49.759613 1012781 system_pods.go:89] "coredns-5dd5756b68-xxvjs" [c84c7ea3-5cfe-4b51-8ecc-4ae685979421] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:38:49.759621 1012781 system_pods.go:89] "etcd-old-k8s-version-661561" [dbc43c9a-b5a3-42dd-ae59-b2125a277983] Running
	I1208 01:38:49.759627 1012781 system_pods.go:89] "kindnet-9jp8g" [e0fbe6c7-ca22-4bab-be07-f045aeed304c] Running
	I1208 01:38:49.759632 1012781 system_pods.go:89] "kube-apiserver-old-k8s-version-661561" [908b1140-c8a0-479c-9c16-9866dfd5cea7] Running
	I1208 01:38:49.759636 1012781 system_pods.go:89] "kube-controller-manager-old-k8s-version-661561" [008ad459-4608-4a91-9512-21f2a7f6cfa8] Running
	I1208 01:38:49.759640 1012781 system_pods.go:89] "kube-proxy-c9bhh" [073ff9de-ffe4-4516-85e7-896806ec173b] Running
	I1208 01:38:49.759644 1012781 system_pods.go:89] "kube-scheduler-old-k8s-version-661561" [f01daccd-291d-4abe-9768-bddd37fc013f] Running
	I1208 01:38:49.759650 1012781 system_pods.go:89] "storage-provisioner" [a060a4ca-7a53-4438-8766-27c6b138d922] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:38:49.759674 1012781 retry.go:31] will retry after 211.340479ms: missing components: kube-dns
	I1208 01:38:49.977143 1012781 system_pods.go:86] 8 kube-system pods found
	I1208 01:38:49.977228 1012781 system_pods.go:89] "coredns-5dd5756b68-xxvjs" [c84c7ea3-5cfe-4b51-8ecc-4ae685979421] Running
	I1208 01:38:49.977250 1012781 system_pods.go:89] "etcd-old-k8s-version-661561" [dbc43c9a-b5a3-42dd-ae59-b2125a277983] Running
	I1208 01:38:49.977269 1012781 system_pods.go:89] "kindnet-9jp8g" [e0fbe6c7-ca22-4bab-be07-f045aeed304c] Running
	I1208 01:38:49.977302 1012781 system_pods.go:89] "kube-apiserver-old-k8s-version-661561" [908b1140-c8a0-479c-9c16-9866dfd5cea7] Running
	I1208 01:38:49.977326 1012781 system_pods.go:89] "kube-controller-manager-old-k8s-version-661561" [008ad459-4608-4a91-9512-21f2a7f6cfa8] Running
	I1208 01:38:49.977344 1012781 system_pods.go:89] "kube-proxy-c9bhh" [073ff9de-ffe4-4516-85e7-896806ec173b] Running
	I1208 01:38:49.977362 1012781 system_pods.go:89] "kube-scheduler-old-k8s-version-661561" [f01daccd-291d-4abe-9768-bddd37fc013f] Running
	I1208 01:38:49.977391 1012781 system_pods.go:89] "storage-provisioner" [a060a4ca-7a53-4438-8766-27c6b138d922] Running
	I1208 01:38:49.977417 1012781 system_pods.go:126] duration metric: took 222.268823ms to wait for k8s-apps to be running ...
	I1208 01:38:49.977439 1012781 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:38:49.977520 1012781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:38:49.992353 1012781 system_svc.go:56] duration metric: took 14.905809ms WaitForService to wait for kubelet
	I1208 01:38:49.992379 1012781 kubeadm.go:587] duration metric: took 14.860928981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:38:49.992399 1012781 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:38:49.995286 1012781 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:38:49.995313 1012781 node_conditions.go:123] node cpu capacity is 2
	I1208 01:38:49.995325 1012781 node_conditions.go:105] duration metric: took 2.921741ms to run NodePressure ...
	I1208 01:38:49.995338 1012781 start.go:242] waiting for startup goroutines ...
	I1208 01:38:49.995345 1012781 start.go:247] waiting for cluster config update ...
	I1208 01:38:49.995356 1012781 start.go:256] writing updated cluster config ...
	I1208 01:38:49.995633 1012781 ssh_runner.go:195] Run: rm -f paused
	I1208 01:38:50.006051 1012781 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:38:50.012712 1012781 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xxvjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.018681 1012781 pod_ready.go:94] pod "coredns-5dd5756b68-xxvjs" is "Ready"
	I1208 01:38:50.018720 1012781 pod_ready.go:86] duration metric: took 5.923327ms for pod "coredns-5dd5756b68-xxvjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.022565 1012781 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.028524 1012781 pod_ready.go:94] pod "etcd-old-k8s-version-661561" is "Ready"
	I1208 01:38:50.028554 1012781 pod_ready.go:86] duration metric: took 5.956599ms for pod "etcd-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.032064 1012781 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.042783 1012781 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-661561" is "Ready"
	I1208 01:38:50.042813 1012781 pod_ready.go:86] duration metric: took 10.668278ms for pod "kube-apiserver-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.052970 1012781 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.410648 1012781 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-661561" is "Ready"
	I1208 01:38:50.410676 1012781 pod_ready.go:86] duration metric: took 357.679322ms for pod "kube-controller-manager-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:50.611601 1012781 pod_ready.go:83] waiting for pod "kube-proxy-c9bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:51.011745 1012781 pod_ready.go:94] pod "kube-proxy-c9bhh" is "Ready"
	I1208 01:38:51.011831 1012781 pod_ready.go:86] duration metric: took 400.199324ms for pod "kube-proxy-c9bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:51.211522 1012781 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:51.610819 1012781 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-661561" is "Ready"
	I1208 01:38:51.610873 1012781 pod_ready.go:86] duration metric: took 399.323259ms for pod "kube-scheduler-old-k8s-version-661561" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:38:51.610913 1012781 pod_ready.go:40] duration metric: took 1.604731774s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:38:51.669541 1012781 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1208 01:38:51.674833 1012781 out.go:203] 
	W1208 01:38:51.679551 1012781 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1208 01:38:51.683855 1012781 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1208 01:38:51.689877 1012781 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-661561" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 01:38:49 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:49.65655248Z" level=info msg="Created container 5f550170085a3ead093be1746fbaf33254891a19aa13ecb693d7fe4042fcf23c: kube-system/coredns-5dd5756b68-xxvjs/coredns" id=026f6e41-8855-45f8-89e4-ca39f32ca0b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:38:49 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:49.666673247Z" level=info msg="Starting container: 5f550170085a3ead093be1746fbaf33254891a19aa13ecb693d7fe4042fcf23c" id=275c21f3-061d-43c2-8080-33bbee8c973a name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:38:49 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:49.668652692Z" level=info msg="Started container" PID=1934 containerID=5f550170085a3ead093be1746fbaf33254891a19aa13ecb693d7fe4042fcf23c description=kube-system/coredns-5dd5756b68-xxvjs/coredns id=275c21f3-061d-43c2-8080-33bbee8c973a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6e61ace367f85128942f7f98f058227823d6a7a7f81a2da17402066221d4777
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.220948829Z" level=info msg="Running pod sandbox: default/busybox/POD" id=74bd557c-ad94-4751-8a3b-1fb18cac72c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.221018845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.230683574Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d3dfed86987e62d02e6ca631564d62d444452a589911950a6ac9bca47aedaa0d UID:d38f2f89-9cb5-463f-96c6-e17dab365206 NetNS:/var/run/netns/9792ff9f-a900-4e95-9c84-5927f4eea6d4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078c38}] Aliases:map[]}"
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.23072538Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.242352626Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d3dfed86987e62d02e6ca631564d62d444452a589911950a6ac9bca47aedaa0d UID:d38f2f89-9cb5-463f-96c6-e17dab365206 NetNS:/var/run/netns/9792ff9f-a900-4e95-9c84-5927f4eea6d4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078c38}] Aliases:map[]}"
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.242519069Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.246705096Z" level=info msg="Ran pod sandbox d3dfed86987e62d02e6ca631564d62d444452a589911950a6ac9bca47aedaa0d with infra container: default/busybox/POD" id=74bd557c-ad94-4751-8a3b-1fb18cac72c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.248899822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc0bf442-7008-4d21-a6e6-6e8344ed8985 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.249021013Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dc0bf442-7008-4d21-a6e6-6e8344ed8985 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.249059282Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dc0bf442-7008-4d21-a6e6-6e8344ed8985 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.251108849Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19742702-a3bc-474f-8d1b-e589331c841d name=/runtime.v1.ImageService/PullImage
	Dec 08 01:38:52 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:52.253479904Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.33982994Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=19742702-a3bc-474f-8d1b-e589331c841d name=/runtime.v1.ImageService/PullImage
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.341062414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4141a7e5-297b-441f-97d3-22431c982258 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.343839817Z" level=info msg="Creating container: default/busybox/busybox" id=1dbd8fa4-35c5-4684-88a7-d2ce7f0f339c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.344111443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.349026389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.349646089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.365188385Z" level=info msg="Created container 31f8ca7b15cf92738d3b6064064c9a52c756198e57d8d0d2e0a9ee2a3ba96dbf: default/busybox/busybox" id=1dbd8fa4-35c5-4684-88a7-d2ce7f0f339c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.366991179Z" level=info msg="Starting container: 31f8ca7b15cf92738d3b6064064c9a52c756198e57d8d0d2e0a9ee2a3ba96dbf" id=e88b9a13-8bed-45f4-945e-98c3a814f4a8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:38:54 old-k8s-version-661561 crio[837]: time="2025-12-08T01:38:54.369655629Z" level=info msg="Started container" PID=1988 containerID=31f8ca7b15cf92738d3b6064064c9a52c756198e57d8d0d2e0a9ee2a3ba96dbf description=default/busybox/busybox id=e88b9a13-8bed-45f4-945e-98c3a814f4a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3dfed86987e62d02e6ca631564d62d444452a589911950a6ac9bca47aedaa0d
	Dec 08 01:39:01 old-k8s-version-661561 crio[837]: time="2025-12-08T01:39:01.097466966Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
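
The CRI-O journal above shows the CRI call sequence driven by the kubelet for the busybox pod: RunPodSandbox, ImageStatus (image not found), PullImage, then CreateContainer and StartContainer. As a rough sketch of that gRPC API (not kubelet or minikube code), a client can issue the same ImageService calls over the CRI-O socket; the socket path and image reference come from the log, everything else is an assumption of the example.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path as advertised by this node's CRI-O.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	img := runtimeapi.NewImageServiceClient(conn)
	ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// ImageStatus — the log above shows this returning "not found".
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: ref})
	if err != nil {
		panic(err)
	}
	if st.GetImage() == nil {
		// PullImage, mirroring the "Pulling image" / "Pulled image" lines.
		resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: ref})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", resp.GetImageRef())
	} else {
		fmt.Println("already present:", st.GetImage().GetId())
	}
	// CreateContainer / StartContainer follow the same pattern against the
	// RuntimeService client.
}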
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	31f8ca7b15cf9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   d3dfed86987e6       busybox                                          default
	5f550170085a3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   f6e61ace367f8       coredns-5dd5756b68-xxvjs                         kube-system
	33992aff6e27f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   aae3783fe0bb0       storage-provisioner                              kube-system
	faf61c44faf93       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   492e1957659e1       kindnet-9jp8g                                    kube-system
	b0aa7ac47b309       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   ffd44d29cc20c       kube-proxy-c9bhh                                 kube-system
	c37fc15e90cd6       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   8aec44a43499a       kube-scheduler-old-k8s-version-661561            kube-system
	4cb5e4d7ba501       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   3fc646886dd43       kube-controller-manager-old-k8s-version-661561   kube-system
	78815c161d6f3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   c4e11920a9010       kube-apiserver-old-k8s-version-661561            kube-system
	b3d073fae5c67       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   04cf8879353f3       etcd-old-k8s-version-661561                      kube-system
	
	
	==> coredns [5f550170085a3ead093be1746fbaf33254891a19aa13ecb693d7fe4042fcf23c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60269 - 13307 "HINFO IN 897911940289591209.8743103783027689374. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.037792864s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-661561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-661561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-661561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_38_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:38:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-661561
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:38:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:38:53 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:38:53 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:38:53 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:38:53 +0000   Mon, 08 Dec 2025 01:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-661561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                bbb3bea3-db6a-4a1e-9c0a-2e379053e90a
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-xxvjs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-661561                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-9jp8g                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-661561             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-661561    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-c9bhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-661561             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-661561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-661561 event: Registered Node old-k8s-version-661561 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-661561 status is now: NodeReady
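
For orientation, the percentages in the Allocated resources table above are requests and limits measured against node allocatable, shown as whole percents: 850m CPU requested of 2000m allocatable is 42%, and 220Mi memory of 8022300Ki (roughly 7834Mi) allocatable is the 2% shown.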
	
	
	==> dmesg <==
	[Dec 8 01:03] overlayfs: idmapped layers are currently not supported
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b3d073fae5c67f0216c38ba40a0cbe1d55073d257ae361c645327d240e34e408] <==
	{"level":"info","ts":"2025-12-08T01:38:15.498249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-08T01:38:15.498658Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-08T01:38:15.507406Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-08T01:38:15.507734Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-08T01:38:15.507551Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:38:15.508272Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:38:15.508199Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-08T01:38:15.767008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-08T01:38:15.767114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-08T01:38:15.767152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-08T01:38:15.767191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-08T01:38:15.767221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-08T01:38:15.767253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-08T01:38:15.767283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-08T01:38:15.768647Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:38:15.771851Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-661561 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-08T01:38:15.771921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:38:15.772727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:38:15.774916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:38:15.774976Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:38:15.775122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:38:15.778974Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-08T01:38:15.779101Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-08T01:38:15.779136Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-08T01:38:15.793225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 01:39:02 up  6:21,  0 user,  load average: 2.26, 2.54, 2.10
	Linux old-k8s-version-661561 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [faf61c44faf939526a248ee63c2a53edba2aabecec6bb2af2b35b9579d479f66] <==
	I1208 01:38:38.823296       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:38:38.823528       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1208 01:38:38.823661       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:38:38.823678       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:38:38.823692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:38:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:38:39.120097       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:38:39.120123       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:38:39.120132       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:38:39.120824       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1208 01:38:39.420551       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:38:39.420715       1 metrics.go:72] Registering metrics
	I1208 01:38:39.420777       1 controller.go:711] "Syncing nftables rules"
	I1208 01:38:49.123901       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:38:49.123953       1 main.go:301] handling current node
	I1208 01:38:59.122948       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:38:59.122981       1 main.go:301] handling current node
	
	
	==> kube-apiserver [78815c161d6f3a23b4503f3ee9c734d1e5870dd2258d1b5022f0402ccfb607f4] <==
	I1208 01:38:19.587075       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1208 01:38:19.587115       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1208 01:38:19.587485       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:38:19.587545       1 aggregator.go:166] initial CRD sync complete...
	I1208 01:38:19.587552       1 autoregister_controller.go:141] Starting autoregister controller
	I1208 01:38:19.587557       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:38:19.587561       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:38:19.591363       1 controller.go:624] quota admission added evaluator for: namespaces
	E1208 01:38:19.612028       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1208 01:38:19.815377       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:38:20.285358       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1208 01:38:20.290409       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1208 01:38:20.290534       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:38:20.909415       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:38:21.009588       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:38:21.119920       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1208 01:38:21.131391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1208 01:38:21.133070       1 controller.go:624] quota admission added evaluator for: endpoints
	I1208 01:38:21.141531       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 01:38:21.465318       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1208 01:38:22.530991       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1208 01:38:22.545485       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1208 01:38:22.560091       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1208 01:38:35.145714       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1208 01:38:35.298767       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4cb5e4d7ba501f03caee5a249cf3fad2c1eaa53052047679b14ea70ff529c7c0] <==
	I1208 01:38:34.516140       1 shared_informer.go:318] Caches are synced for daemon sets
	I1208 01:38:34.524436       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 01:38:34.526641       1 shared_informer.go:318] Caches are synced for stateful set
	I1208 01:38:34.946461       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:38:34.968333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:38:34.968370       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1208 01:38:35.157091       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1208 01:38:35.349153       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9jp8g"
	I1208 01:38:35.364890       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c9bhh"
	I1208 01:38:35.411100       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tkqk2"
	I1208 01:38:35.437086       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xxvjs"
	I1208 01:38:35.467298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="311.14329ms"
	I1208 01:38:35.533916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.622514ms"
	I1208 01:38:35.534044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.842µs"
	I1208 01:38:36.786040       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1208 01:38:36.816181       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-tkqk2"
	I1208 01:38:36.837903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.756344ms"
	I1208 01:38:36.853721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.771863ms"
	I1208 01:38:36.854216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.995µs"
	I1208 01:38:49.273679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.534µs"
	I1208 01:38:49.300124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.934µs"
	I1208 01:38:49.421614       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1208 01:38:49.903213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.949754ms"
	I1208 01:38:49.959880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.694303ms"
	I1208 01:38:49.960068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.718µs"
	
	
	==> kube-proxy [b0aa7ac47b30918b84d9714343989d0ee0b91beeb4e82c849d0923ed3ca26a11] <==
	I1208 01:38:35.917560       1 server_others.go:69] "Using iptables proxy"
	I1208 01:38:35.937367       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1208 01:38:36.053055       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:38:36.055155       1 server_others.go:152] "Using iptables Proxier"
	I1208 01:38:36.055193       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 01:38:36.055201       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 01:38:36.055222       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 01:38:36.055491       1 server.go:846] "Version info" version="v1.28.0"
	I1208 01:38:36.055504       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:38:36.062459       1 config.go:188] "Starting service config controller"
	I1208 01:38:36.062486       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 01:38:36.062497       1 config.go:97] "Starting endpoint slice config controller"
	I1208 01:38:36.062500       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 01:38:36.064221       1 config.go:315] "Starting node config controller"
	I1208 01:38:36.064237       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 01:38:36.164590       1 shared_informer.go:318] Caches are synced for service config
	I1208 01:38:36.166590       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1208 01:38:36.166796       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c37fc15e90cd6a28b79f715e67d83e313e26e358e4010cc4771c384247b89201] <==
	W1208 01:38:19.537263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1208 01:38:19.537445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1208 01:38:19.537331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 01:38:19.537505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 01:38:19.537374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 01:38:19.537567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1208 01:38:19.539106       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 01:38:19.539187       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 01:38:20.375537       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 01:38:20.375657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 01:38:20.470618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1208 01:38:20.470719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1208 01:38:20.524479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1208 01:38:20.524591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1208 01:38:20.580538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1208 01:38:20.580584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1208 01:38:20.596038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1208 01:38:20.596149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1208 01:38:20.648144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 01:38:20.648266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 01:38:20.702071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 01:38:20.702202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1208 01:38:20.829971       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 01:38:20.830014       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1208 01:38:23.615757       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.399213    1373 topology_manager.go:215] "Topology Admit Handler" podUID="073ff9de-ffe4-4516-85e7-896806ec173b" podNamespace="kube-system" podName="kube-proxy-c9bhh"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440364    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073ff9de-ffe4-4516-85e7-896806ec173b-lib-modules\") pod \"kube-proxy-c9bhh\" (UID: \"073ff9de-ffe4-4516-85e7-896806ec173b\") " pod="kube-system/kube-proxy-c9bhh"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440421    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073ff9de-ffe4-4516-85e7-896806ec173b-xtables-lock\") pod \"kube-proxy-c9bhh\" (UID: \"073ff9de-ffe4-4516-85e7-896806ec173b\") " pod="kube-system/kube-proxy-c9bhh"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440448    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e0fbe6c7-ca22-4bab-be07-f045aeed304c-cni-cfg\") pod \"kindnet-9jp8g\" (UID: \"e0fbe6c7-ca22-4bab-be07-f045aeed304c\") " pod="kube-system/kindnet-9jp8g"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440524    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0fbe6c7-ca22-4bab-be07-f045aeed304c-xtables-lock\") pod \"kindnet-9jp8g\" (UID: \"e0fbe6c7-ca22-4bab-be07-f045aeed304c\") " pod="kube-system/kindnet-9jp8g"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440569    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0fbe6c7-ca22-4bab-be07-f045aeed304c-lib-modules\") pod \"kindnet-9jp8g\" (UID: \"e0fbe6c7-ca22-4bab-be07-f045aeed304c\") " pod="kube-system/kindnet-9jp8g"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440615    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbk2t\" (UniqueName: \"kubernetes.io/projected/e0fbe6c7-ca22-4bab-be07-f045aeed304c-kube-api-access-wbk2t\") pod \"kindnet-9jp8g\" (UID: \"e0fbe6c7-ca22-4bab-be07-f045aeed304c\") " pod="kube-system/kindnet-9jp8g"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440651    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztvs\" (UniqueName: \"kubernetes.io/projected/073ff9de-ffe4-4516-85e7-896806ec173b-kube-api-access-wztvs\") pod \"kube-proxy-c9bhh\" (UID: \"073ff9de-ffe4-4516-85e7-896806ec173b\") " pod="kube-system/kube-proxy-c9bhh"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: I1208 01:38:35.440676    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/073ff9de-ffe4-4516-85e7-896806ec173b-kube-proxy\") pod \"kube-proxy-c9bhh\" (UID: \"073ff9de-ffe4-4516-85e7-896806ec173b\") " pod="kube-system/kube-proxy-c9bhh"
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: W1208 01:38:35.709176    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/crio-492e1957659e17d9d7a108bd699165e6f9400ce7b00c3a8a6403cdd588b63a2d WatchSource:0}: Error finding container 492e1957659e17d9d7a108bd699165e6f9400ce7b00c3a8a6403cdd588b63a2d: Status 404 returned error can't find the container with id 492e1957659e17d9d7a108bd699165e6f9400ce7b00c3a8a6403cdd588b63a2d
	Dec 08 01:38:35 old-k8s-version-661561 kubelet[1373]: W1208 01:38:35.728731    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/crio-ffd44d29cc20c1125443e34fc7f3f10619fb90e4627a7bbec21f5b5c69667772 WatchSource:0}: Error finding container ffd44d29cc20c1125443e34fc7f3f10619fb90e4627a7bbec21f5b5c69667772: Status 404 returned error can't find the container with id ffd44d29cc20c1125443e34fc7f3f10619fb90e4627a7bbec21f5b5c69667772
	Dec 08 01:38:38 old-k8s-version-661561 kubelet[1373]: I1208 01:38:38.870803    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9jp8g" podStartSLOduration=0.941670115 podCreationTimestamp="2025-12-08 01:38:35 +0000 UTC" firstStartedPulling="2025-12-08 01:38:35.715763636 +0000 UTC m=+13.250013854" lastFinishedPulling="2025-12-08 01:38:38.644833404 +0000 UTC m=+16.179083622" observedRunningTime="2025-12-08 01:38:38.870480852 +0000 UTC m=+16.404731070" watchObservedRunningTime="2025-12-08 01:38:38.870739883 +0000 UTC m=+16.404990100"
	Dec 08 01:38:38 old-k8s-version-661561 kubelet[1373]: I1208 01:38:38.871009    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c9bhh" podStartSLOduration=3.870986614 podCreationTimestamp="2025-12-08 01:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:38:36.874796217 +0000 UTC m=+14.409046443" watchObservedRunningTime="2025-12-08 01:38:38.870986614 +0000 UTC m=+16.405236840"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.236076    1373 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.266534    1373 topology_manager.go:215] "Topology Admit Handler" podUID="a060a4ca-7a53-4438-8766-27c6b138d922" podNamespace="kube-system" podName="storage-provisioner"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.271783    1373 topology_manager.go:215] "Topology Admit Handler" podUID="c84c7ea3-5cfe-4b51-8ecc-4ae685979421" podNamespace="kube-system" podName="coredns-5dd5756b68-xxvjs"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.371295    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a060a4ca-7a53-4438-8766-27c6b138d922-tmp\") pod \"storage-provisioner\" (UID: \"a060a4ca-7a53-4438-8766-27c6b138d922\") " pod="kube-system/storage-provisioner"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.371375    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c84c7ea3-5cfe-4b51-8ecc-4ae685979421-config-volume\") pod \"coredns-5dd5756b68-xxvjs\" (UID: \"c84c7ea3-5cfe-4b51-8ecc-4ae685979421\") " pod="kube-system/coredns-5dd5756b68-xxvjs"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.371416    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k765l\" (UniqueName: \"kubernetes.io/projected/c84c7ea3-5cfe-4b51-8ecc-4ae685979421-kube-api-access-k765l\") pod \"coredns-5dd5756b68-xxvjs\" (UID: \"c84c7ea3-5cfe-4b51-8ecc-4ae685979421\") " pod="kube-system/coredns-5dd5756b68-xxvjs"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.371454    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6df4\" (UniqueName: \"kubernetes.io/projected/a060a4ca-7a53-4438-8766-27c6b138d922-kube-api-access-g6df4\") pod \"storage-provisioner\" (UID: \"a060a4ca-7a53-4438-8766-27c6b138d922\") " pod="kube-system/storage-provisioner"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: W1208 01:38:49.619701    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/crio-f6e61ace367f85128942f7f98f058227823d6a7a7f81a2da17402066221d4777 WatchSource:0}: Error finding container f6e61ace367f85128942f7f98f058227823d6a7a7f81a2da17402066221d4777: Status 404 returned error can't find the container with id f6e61ace367f85128942f7f98f058227823d6a7a7f81a2da17402066221d4777
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.918736    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xxvjs" podStartSLOduration=14.918692838 podCreationTimestamp="2025-12-08 01:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:38:49.900719777 +0000 UTC m=+27.434969994" watchObservedRunningTime="2025-12-08 01:38:49.918692838 +0000 UTC m=+27.452943064"
	Dec 08 01:38:49 old-k8s-version-661561 kubelet[1373]: I1208 01:38:49.938887    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.938750888 podCreationTimestamp="2025-12-08 01:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:38:49.923024352 +0000 UTC m=+27.457274578" watchObservedRunningTime="2025-12-08 01:38:49.938750888 +0000 UTC m=+27.473001114"
	Dec 08 01:38:51 old-k8s-version-661561 kubelet[1373]: I1208 01:38:51.918734    1373 topology_manager.go:215] "Topology Admit Handler" podUID="d38f2f89-9cb5-463f-96c6-e17dab365206" podNamespace="default" podName="busybox"
	Dec 08 01:38:51 old-k8s-version-661561 kubelet[1373]: I1208 01:38:51.986487    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhck8\" (UniqueName: \"kubernetes.io/projected/d38f2f89-9cb5-463f-96c6-e17dab365206-kube-api-access-hhck8\") pod \"busybox\" (UID: \"d38f2f89-9cb5-463f-96c6-e17dab365206\") " pod="default/busybox"
	
	
	==> storage-provisioner [33992aff6e27fda5195fbd7477cf24eff66b0d6de727b5797bca63356a379cec] <==
	I1208 01:38:49.656561       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:38:49.681455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:38:49.681565       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 01:38:49.691474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:38:49.693838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_76a8afef-af8d-4133-bf4c-8fc061ba8ff0!
	I1208 01:38:49.700600       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"986e9fb0-2e44-4a3d-b9f2-86404551ac54", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-661561_76a8afef-af8d-4133-bf4c-8fc061ba8ff0 became leader
	I1208 01:38:49.794221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_76a8afef-af8d-4133-bf4c-8fc061ba8ff0!
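
The storage-provisioner log above shows the standard client-go leader election sequence: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock (here backed by an Endpoints object, per the event line), win the lease, then start the provisioner controller. As a sketch of that pattern only, using the modern Lease-based lock rather than the Endpoints lock the provisioner records, the client-go API looks roughly like this; the kubeconfig path, durations, and identity are assumptions.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumption: kubeconfig for the target cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				fmt.Println("acquired lease; starting controller")
				<-ctx.Done() // controller loop would run here
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease; shutting down")
			},
		},
	})
}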
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-661561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (8.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-661561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-661561 --alsologtostderr -v=1: exit status 80 (2.252522478s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-661561 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:40:18.157989 1019057 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:40:18.158209 1019057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:18.158237 1019057 out.go:374] Setting ErrFile to fd 2...
	I1208 01:40:18.158257 1019057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:18.158562 1019057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:40:18.158886 1019057 out.go:368] Setting JSON to false
	I1208 01:40:18.158945 1019057 mustload.go:66] Loading cluster: old-k8s-version-661561
	I1208 01:40:18.159411 1019057 config.go:182] Loaded profile config "old-k8s-version-661561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1208 01:40:18.159959 1019057 cli_runner.go:164] Run: docker container inspect old-k8s-version-661561 --format={{.State.Status}}
	I1208 01:40:18.180315 1019057 host.go:66] Checking if "old-k8s-version-661561" exists ...
	I1208 01:40:18.180642 1019057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:18.279901 1019057 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-08 01:40:18.270242378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:18.280535 1019057 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-661561 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1208 01:40:18.283782 1019057 out.go:179] * Pausing node old-k8s-version-661561 ... 
	I1208 01:40:18.287314 1019057 host.go:66] Checking if "old-k8s-version-661561" exists ...
	I1208 01:40:18.287659 1019057 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:18.287699 1019057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561
	I1208 01:40:18.310964 1019057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33777 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/old-k8s-version-661561/id_rsa Username:docker}
	I1208 01:40:18.417849 1019057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:40:18.441403 1019057 pause.go:52] kubelet running: true
	I1208 01:40:18.441468 1019057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:40:18.710327 1019057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:40:18.710415 1019057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:40:18.819982 1019057 cri.go:89] found id: "252c3512ae3866f1479c8caddeae6aa2cc7b4ed75ae08c708c308767303721e6"
	I1208 01:40:18.820008 1019057 cri.go:89] found id: "519208bc470e9706bec61b9b5ac6968add358d9d73f601b1c1404beed17d739a"
	I1208 01:40:18.820013 1019057 cri.go:89] found id: "19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa"
	I1208 01:40:18.820017 1019057 cri.go:89] found id: "afc1a2d7ec80cd10fc94e723f0fa72658620a15e33de2ef0c6e7b721ae07d99b"
	I1208 01:40:18.820020 1019057 cri.go:89] found id: "e7bfc63787639175c63bb390408cb799223ab69316a20f1ef610c444265dae43"
	I1208 01:40:18.820029 1019057 cri.go:89] found id: "db3477f42c8b050631a028c9c177b4b3e9855d1200a8f4514f8d127b54fbcb3b"
	I1208 01:40:18.820032 1019057 cri.go:89] found id: "1e731418e7e9eb3ef33b29a3786cac63eb6d34337f3b85e70054f49effd66264"
	I1208 01:40:18.820035 1019057 cri.go:89] found id: "50b6126c143b75351adf2c3d4c08de132d5ab508f2efcfa73eecbbab003f4842"
	I1208 01:40:18.820038 1019057 cri.go:89] found id: "73dc2c8233cf3b38e74119af9ff7ac7f41e9b14c4ebe75ddf6ba7def29f90d74"
	I1208 01:40:18.820044 1019057 cri.go:89] found id: "44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	I1208 01:40:18.820047 1019057 cri.go:89] found id: "2919f0946ab1b27883a67e0a1d1f724f0c5c22dce6ff3b71fb09c7de4c9f2039"
	I1208 01:40:18.820050 1019057 cri.go:89] found id: ""
	I1208 01:40:18.820097 1019057 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:40:18.835174 1019057 retry.go:31] will retry after 356.287466ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:40:18Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:40:19.191695 1019057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:40:19.206120 1019057 pause.go:52] kubelet running: false
	I1208 01:40:19.206180 1019057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:40:19.406911 1019057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:40:19.407001 1019057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:40:19.500062 1019057 cri.go:89] found id: "252c3512ae3866f1479c8caddeae6aa2cc7b4ed75ae08c708c308767303721e6"
	I1208 01:40:19.500082 1019057 cri.go:89] found id: "519208bc470e9706bec61b9b5ac6968add358d9d73f601b1c1404beed17d739a"
	I1208 01:40:19.500086 1019057 cri.go:89] found id: "19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa"
	I1208 01:40:19.500090 1019057 cri.go:89] found id: "afc1a2d7ec80cd10fc94e723f0fa72658620a15e33de2ef0c6e7b721ae07d99b"
	I1208 01:40:19.500093 1019057 cri.go:89] found id: "e7bfc63787639175c63bb390408cb799223ab69316a20f1ef610c444265dae43"
	I1208 01:40:19.500097 1019057 cri.go:89] found id: "db3477f42c8b050631a028c9c177b4b3e9855d1200a8f4514f8d127b54fbcb3b"
	I1208 01:40:19.500100 1019057 cri.go:89] found id: "1e731418e7e9eb3ef33b29a3786cac63eb6d34337f3b85e70054f49effd66264"
	I1208 01:40:19.500103 1019057 cri.go:89] found id: "50b6126c143b75351adf2c3d4c08de132d5ab508f2efcfa73eecbbab003f4842"
	I1208 01:40:19.500106 1019057 cri.go:89] found id: "73dc2c8233cf3b38e74119af9ff7ac7f41e9b14c4ebe75ddf6ba7def29f90d74"
	I1208 01:40:19.500112 1019057 cri.go:89] found id: "44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	I1208 01:40:19.500115 1019057 cri.go:89] found id: "2919f0946ab1b27883a67e0a1d1f724f0c5c22dce6ff3b71fb09c7de4c9f2039"
	I1208 01:40:19.500118 1019057 cri.go:89] found id: ""
	I1208 01:40:19.500172 1019057 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:40:19.528592 1019057 retry.go:31] will retry after 220.100294ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:40:19Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:40:19.750816 1019057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:40:19.777941 1019057 pause.go:52] kubelet running: false
	I1208 01:40:19.778021 1019057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:40:20.167340 1019057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:40:20.167440 1019057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:40:20.290256 1019057 cri.go:89] found id: "252c3512ae3866f1479c8caddeae6aa2cc7b4ed75ae08c708c308767303721e6"
	I1208 01:40:20.290278 1019057 cri.go:89] found id: "519208bc470e9706bec61b9b5ac6968add358d9d73f601b1c1404beed17d739a"
	I1208 01:40:20.290282 1019057 cri.go:89] found id: "19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa"
	I1208 01:40:20.290286 1019057 cri.go:89] found id: "afc1a2d7ec80cd10fc94e723f0fa72658620a15e33de2ef0c6e7b721ae07d99b"
	I1208 01:40:20.290294 1019057 cri.go:89] found id: "e7bfc63787639175c63bb390408cb799223ab69316a20f1ef610c444265dae43"
	I1208 01:40:20.290298 1019057 cri.go:89] found id: "db3477f42c8b050631a028c9c177b4b3e9855d1200a8f4514f8d127b54fbcb3b"
	I1208 01:40:20.290301 1019057 cri.go:89] found id: "1e731418e7e9eb3ef33b29a3786cac63eb6d34337f3b85e70054f49effd66264"
	I1208 01:40:20.290304 1019057 cri.go:89] found id: "50b6126c143b75351adf2c3d4c08de132d5ab508f2efcfa73eecbbab003f4842"
	I1208 01:40:20.290307 1019057 cri.go:89] found id: "73dc2c8233cf3b38e74119af9ff7ac7f41e9b14c4ebe75ddf6ba7def29f90d74"
	I1208 01:40:20.290314 1019057 cri.go:89] found id: "44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	I1208 01:40:20.290317 1019057 cri.go:89] found id: "2919f0946ab1b27883a67e0a1d1f724f0c5c22dce6ff3b71fb09c7de4c9f2039"
	I1208 01:40:20.290320 1019057 cri.go:89] found id: ""
	I1208 01:40:20.290368 1019057 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:40:20.312282 1019057 out.go:203] 
	W1208 01:40:20.315317 1019057 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:40:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:40:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 01:40:20.315360 1019057 out.go:285] * 
	* 
	W1208 01:40:20.324951 1019057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:40:20.330113 1019057 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-661561 --alsologtostderr -v=1 failed: exit status 80
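Note on the failure above: each attempt at `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so pause gives up with GUEST_PAUSE (exit status 80) even though crictl had just listed the kube-system containers. A rough manual check from the host looks like this (a sketch only; the profile name and the individual commands are taken from this run's trace, not from a documented recovery procedure):
	out/minikube-linux-arm64 -p old-k8s-version-661561 ssh -- sudo ls /run/runc        # reproduces the "no such file or directory" error if the runc state dir is missing
	out/minikube-linux-arm64 -p old-k8s-version-661561 ssh -- sudo crictl ps --quiet   # CRI-O still reports the container IDs found above
	out/minikube-linux-arm64 -p old-k8s-version-661561 ssh -- sudo runc list -f json   # the call pause.go retried before giving up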
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-661561
helpers_test.go:243: (dbg) docker inspect old-k8s-version-661561:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	        "Created": "2025-12-08T01:37:59.095493293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1016438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:39:16.334049128Z",
	            "FinishedAt": "2025-12-08T01:39:15.483975083Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hostname",
	        "HostsPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hosts",
	        "LogPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f-json.log",
	        "Name": "/old-k8s-version-661561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-661561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-661561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	                "LowerDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-661561",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-661561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-661561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f84ccefd247cc9dc93998f7705a385407cbf5d00ae0386e3d727308e1cee879b",
	            "SandboxKey": "/var/run/docker/netns/f84ccefd247c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33778"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33780"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-661561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:0e:32:1e:64:eb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f564ce91e5f1a7355aa0c3c6eaf3b409225f9ea728cbb26fa06f64c7acc7ac75",
	                    "EndpointID": "34cd7eab49e3b133961654f9c205a795e1f8c624fbd35c952cc28696aab491f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-661561",
	                        "bab08c504dac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
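Cross-checking the inspect output above: the container is still Running and 22/tcp is published on 127.0.0.1:33777, the same port the pause attempt's SSH client used. The forwarded port can be read back directly with the template the trace itself ran earlier (that command reused verbatim, shown here only for reference):
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-661561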
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561: exit status 2 (528.818126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
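The non-zero status is consistent with the failed pause: the host container reports Running (stdout above) while kubelet had already been disabled by the pause attempt (pause.go logged "kubelet running: false" after the first disable). A per-component view makes that split visible (a sketch; --output=json is a standard minikube status flag, profile name from this run):
	out/minikube-linux-arm64 status -p old-k8s-version-661561 --output=json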
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25: (2.10626288s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-000739 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo containerd config dump                                                                                                                                                                                                  │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo crio config                                                                                                                                                                                                             │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ delete  │ -p cilium-000739                                                                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:40:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:40:09.711961 1018441 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:40:09.712072 1018441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:09.712075 1018441 out.go:374] Setting ErrFile to fd 2...
	I1208 01:40:09.712079 1018441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:09.712335 1018441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:40:09.712732 1018441 out.go:368] Setting JSON to false
	I1208 01:40:09.713796 1018441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22942,"bootTime":1765135068,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:40:09.713923 1018441 start.go:143] virtualization:  
	I1208 01:40:09.717494 1018441 out.go:179] * [cert-expiration-428091] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:40:09.720505 1018441 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:40:09.720622 1018441 notify.go:221] Checking for updates...
	I1208 01:40:09.726153 1018441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:40:09.729046 1018441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:40:09.731866 1018441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:40:09.734573 1018441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:40:09.737479 1018441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:40:09.740683 1018441 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:09.741343 1018441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:40:09.781904 1018441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:40:09.782017 1018441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:09.846996 1018441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:40:09.83687235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:09.847096 1018441 docker.go:319] overlay module found
	I1208 01:40:09.850228 1018441 out.go:179] * Using the docker driver based on existing profile
	I1208 01:40:09.853172 1018441 start.go:309] selected driver: docker
	I1208 01:40:09.853183 1018441 start.go:927] validating driver "docker" against &{Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:09.853292 1018441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:40:09.854057 1018441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:09.921734 1018441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:40:09.912394799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:09.922080 1018441 cni.go:84] Creating CNI manager for ""
	I1208 01:40:09.922146 1018441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:09.922189 1018441 start.go:353] cluster config:
	{Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:09.925290 1018441 out.go:179] * Starting "cert-expiration-428091" primary control-plane node in "cert-expiration-428091" cluster
	I1208 01:40:09.928065 1018441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:40:09.931041 1018441 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:40:09.934083 1018441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:09.934126 1018441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:40:09.934144 1018441 cache.go:65] Caching tarball of preloaded images
	I1208 01:40:09.934152 1018441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:40:09.934230 1018441 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:40:09.934239 1018441 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:40:09.934353 1018441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/config.json ...
	I1208 01:40:09.954943 1018441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:40:09.954954 1018441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:40:09.954975 1018441 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:40:09.955045 1018441 start.go:360] acquireMachinesLock for cert-expiration-428091: {Name:mk3e9ac88d28b3bf834f648d1ec918bdd8ecc323 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:09.955113 1018441 start.go:364] duration metric: took 50.487µs to acquireMachinesLock for "cert-expiration-428091"
	I1208 01:40:09.955133 1018441 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:40:09.955137 1018441 fix.go:54] fixHost starting: 
	I1208 01:40:09.955471 1018441 cli_runner.go:164] Run: docker container inspect cert-expiration-428091 --format={{.State.Status}}
	I1208 01:40:09.984729 1018441 fix.go:112] recreateIfNeeded on cert-expiration-428091: state=Running err=<nil>
	W1208 01:40:09.984761 1018441 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:40:09.988069 1018441 out.go:252] * Updating the running docker "cert-expiration-428091" container ...
	I1208 01:40:09.988119 1018441 machine.go:94] provisionDockerMachine start ...
	I1208 01:40:09.988200 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.016308 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.016669 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.016677 1018441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:40:10.176007 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-428091
	
	I1208 01:40:10.176021 1018441 ubuntu.go:182] provisioning hostname "cert-expiration-428091"
	I1208 01:40:10.176091 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.193807 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.194105 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.194113 1018441 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-428091 && echo "cert-expiration-428091" | sudo tee /etc/hostname
	I1208 01:40:10.361304 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-428091
	
	I1208 01:40:10.361370 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.379043 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.379354 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.379368 1018441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-428091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-428091/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-428091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:40:10.539547 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:40:10.539561 1018441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:40:10.539582 1018441 ubuntu.go:190] setting up certificates
	I1208 01:40:10.539600 1018441 provision.go:84] configureAuth start
	I1208 01:40:10.539661 1018441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-428091
	I1208 01:40:10.558675 1018441 provision.go:143] copyHostCerts
	I1208 01:40:10.558795 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:40:10.558805 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:40:10.558926 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:40:10.559056 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:40:10.559061 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:40:10.559088 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:40:10.559145 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:40:10.559148 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:40:10.559175 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:40:10.559227 1018441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-428091 san=[127.0.0.1 192.168.85.2 cert-expiration-428091 localhost minikube]
	I1208 01:40:10.897078 1018441 provision.go:177] copyRemoteCerts
	I1208 01:40:10.897131 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:40:10.897176 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.914896 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:11.025371 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1208 01:40:11.051942 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:40:11.073586 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:40:11.093593 1018441 provision.go:87] duration metric: took 553.971087ms to configureAuth
	I1208 01:40:11.093612 1018441 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:40:11.093798 1018441 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:11.093893 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:11.113369 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:11.113676 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:11.113687 1018441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:40:16.522427 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:40:16.522441 1018441 machine.go:97] duration metric: took 6.534315351s to provisionDockerMachine
	I1208 01:40:16.522452 1018441 start.go:293] postStartSetup for "cert-expiration-428091" (driver="docker")
	I1208 01:40:16.522462 1018441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:40:16.522542 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:40:16.522606 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.542936 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.651164 1018441 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:40:16.655835 1018441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:40:16.655853 1018441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:40:16.655863 1018441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:40:16.655926 1018441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:40:16.656022 1018441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:40:16.656127 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:40:16.670888 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:16.689391 1018441 start.go:296] duration metric: took 166.925734ms for postStartSetup
	I1208 01:40:16.689463 1018441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:40:16.689525 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.707043 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.812254 1018441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:40:16.817908 1018441 fix.go:56] duration metric: took 6.862763351s for fixHost
	I1208 01:40:16.817940 1018441 start.go:83] releasing machines lock for "cert-expiration-428091", held for 6.862803573s
	I1208 01:40:16.818014 1018441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-428091
	I1208 01:40:16.837122 1018441 ssh_runner.go:195] Run: cat /version.json
	I1208 01:40:16.837173 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.837190 1018441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:40:16.837295 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.864614 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.872956 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:17.069320 1018441 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:17.076189 1018441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:40:17.135070 1018441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:40:17.146374 1018441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:40:17.146435 1018441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:40:17.155718 1018441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:40:17.155731 1018441 start.go:496] detecting cgroup driver to use...
	I1208 01:40:17.155762 1018441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:40:17.155821 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:40:17.172158 1018441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:40:17.185722 1018441 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:40:17.185775 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:40:17.202176 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:40:17.215610 1018441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:40:17.357760 1018441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:40:17.511682 1018441 docker.go:234] disabling docker service ...
	I1208 01:40:17.511741 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:40:17.526989 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:40:17.540503 1018441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:40:17.698777 1018441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:40:17.890913 1018441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:40:17.909056 1018441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:40:17.925711 1018441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:40:17.925771 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.934837 1018441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:40:17.934914 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.945124 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.954924 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.965081 1018441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:40:17.975263 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.985459 1018441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.995362 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:18.008597 1018441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:40:18.021359 1018441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:40:18.030336 1018441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:18.238441 1018441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:40:18.496623 1018441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:40:18.496683 1018441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:40:18.503278 1018441 start.go:564] Will wait 60s for crictl version
	I1208 01:40:18.503341 1018441 ssh_runner.go:195] Run: which crictl
	I1208 01:40:18.509633 1018441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:40:18.557232 1018441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:40:18.557312 1018441 ssh_runner.go:195] Run: crio --version
	I1208 01:40:18.592780 1018441 ssh_runner.go:195] Run: crio --version
	I1208 01:40:18.634408 1018441 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:40:18.637403 1018441 cli_runner.go:164] Run: docker network inspect cert-expiration-428091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:18.655734 1018441 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:40:18.660404 1018441 kubeadm.go:884] updating cluster {Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:40:18.660525 1018441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:18.660578 1018441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:18.694781 1018441 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:18.694793 1018441 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:40:18.694884 1018441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:18.726006 1018441 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:18.726017 1018441 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:40:18.726025 1018441 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:40:18.726363 1018441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-428091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:40:18.726461 1018441 ssh_runner.go:195] Run: crio config
	I1208 01:40:18.815523 1018441 cni.go:84] Creating CNI manager for ""
	I1208 01:40:18.815535 1018441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:18.815552 1018441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:40:18.815574 1018441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-428091 NodeName:cert-expiration-428091 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:40:18.815708 1018441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-428091"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:40:18.815787 1018441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:40:18.826070 1018441 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:40:18.826134 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:40:18.835846 1018441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1208 01:40:18.849854 1018441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:40:18.863198 1018441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1208 01:40:18.876568 1018441 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:40:18.881039 1018441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:19.025554 1018441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:40:19.039683 1018441 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091 for IP: 192.168.85.2
	I1208 01:40:19.039694 1018441 certs.go:195] generating shared ca certs ...
	I1208 01:40:19.039708 1018441 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.039850 1018441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:40:19.039888 1018441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:40:19.039893 1018441 certs.go:257] generating profile certs ...
	W1208 01:40:19.040021 1018441 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1208 01:40:19.040057 1018441 certs.go:629] cert expired /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt: expiration: 2025-12-08 01:39:45 +0000 UTC, now: 2025-12-08 01:40:19.040040722 +0000 UTC m=+9.372682250
	I1208 01:40:19.040169 1018441 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key
	I1208 01:40:19.040185 1018441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt with IP's: []
	I1208 01:40:19.403826 1018441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt ...
	I1208 01:40:19.403848 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt: {Name:mk37cad1bc57037e279c363463249932239b3d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.404024 1018441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key ...
	I1208 01:40:19.404596 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key: {Name:mk6db4f3019d110c5a1550104fd85393a313e603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1208 01:40:19.404890 1018441 out.go:285] ! Certificate apiserver.crt.71eab2ca has expired. Generating a new one...
	I1208 01:40:19.406913 1018441 certs.go:629] cert expired /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca: expiration: 2025-12-08 01:39:45 +0000 UTC, now: 2025-12-08 01:40:19.406900239 +0000 UTC m=+9.739541800
	I1208 01:40:19.407262 1018441 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key.71eab2ca
	I1208 01:40:19.407280 1018441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	
	
	==> CRI-O <==
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.832996843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.840139523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.840858883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.856210166Z" level=info msg="Created container 44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper" id=5f0c77a5-366b-4c02-9870-f986adb9ffe8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.859046006Z" level=info msg="Starting container: 44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790" id=0795e3b7-4c60-4df8-b0db-b1a4e00676c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.861846859Z" level=info msg="Started container" PID=1641 containerID=44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper id=0795e3b7-4c60-4df8-b0db-b1a4e00676c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a9806a3f0954ce2796fbdf9ec08b5ce78639a08332fbd520ffe23e3f874c18a
	Dec 08 01:40:01 old-k8s-version-661561 conmon[1639]: conmon 44ff7c5ba10876a46c2a <ninfo>: container 1641 exited with status 1
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.377220417Z" level=info msg="Removing container: d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.387079953Z" level=info msg="Error loading conmon cgroup of container d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9: cgroup deleted" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.392555019Z" level=info msg="Removed container d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.961059993Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967371172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967544228Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967626485Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.97219133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.972252828Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.972274137Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978665899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978699196Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978722154Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.98485002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.98488714Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.984903657Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.991229302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.99126753Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	44ff7c5ba1087       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   0a9806a3f0954       dashboard-metrics-scraper-5f989dc9cf-pc9v8       kubernetes-dashboard
	252c3512ae386       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   b9f5c5df2ed9a       storage-provisioner                              kube-system
	2919f0946ab1b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   b1f34a1cdb5e4       kubernetes-dashboard-8694d4445c-dxkn2            kubernetes-dashboard
	519208bc470e9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   a0f6958cda4ca       coredns-5dd5756b68-xxvjs                         kube-system
	a0c8be5b4b2bc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   3ae9c854c906c       busybox                                          default
	19897ecc4f1f9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   b9f5c5df2ed9a       storage-provisioner                              kube-system
	afc1a2d7ec80c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   09ba7db83676a       kindnet-9jp8g                                    kube-system
	e7bfc63787639       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   0fa35f1c91243       kube-proxy-c9bhh                                 kube-system
	db3477f42c8b0       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   11593b4611d55       kube-apiserver-old-k8s-version-661561            kube-system
	1e731418e7e9e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   764f2c22b4307       etcd-old-k8s-version-661561                      kube-system
	50b6126c143b7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   e94c45cc73ee9       kube-controller-manager-old-k8s-version-661561   kube-system
	73dc2c8233cf3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   d2117bc2a4734       kube-scheduler-old-k8s-version-661561            kube-system
	
	
	==> coredns [519208bc470e9706bec61b9b5ac6968add358d9d73f601b1c1404beed17d739a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40908 - 50205 "HINFO IN 5316456193879195217.5395911338999867647. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013793654s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-661561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-661561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-661561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_38_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:38:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-661561
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:40:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-661561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                bbb3bea3-db6a-4a1e-9c0a-2e379053e90a
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-xxvjs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-661561                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-9jp8g                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-661561             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-661561    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-c9bhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-661561             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pc9v8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dxkn2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-661561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-661561 event: Registered Node old-k8s-version-661561 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-661561 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-661561 event: Registered Node old-k8s-version-661561 in Controller
	
	
	==> dmesg <==
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1e731418e7e9eb3ef33b29a3786cac63eb6d34337f3b85e70054f49effd66264] <==
	{"level":"info","ts":"2025-12-08T01:39:23.979675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-08T01:39:23.97972Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-08T01:39:23.979984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-08T01:39:23.980073Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-08T01:39:23.9802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:39:23.980254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:39:24.005394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-08T01:39:24.00553Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:39:24.006501Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:39:24.00855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-08T01:39:24.008663Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-08T01:39:25.733059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.73317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.733227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.733264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.743013Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-661561 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-08T01:39:25.743208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:39:25.744199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-08T01:39:25.744311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:39:25.749516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-08T01:39:25.749923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-08T01:39:25.749954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:40:22 up  6:22,  0 user,  load average: 3.02, 2.58, 2.14
	Linux old-k8s-version-661561 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afc1a2d7ec80cd10fc94e723f0fa72658620a15e33de2ef0c6e7b721ae07d99b] <==
	I1208 01:39:29.740400       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:39:29.740803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1208 01:39:29.741022       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:39:29.741065       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:39:29.741125       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:39:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:39:29.958044       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:39:29.958066       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:39:29.958075       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:39:29.958341       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:39:59.961302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:39:59.961307       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:39:59.962600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:39:59.962708       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:40:01.658421       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:40:01.658455       1 metrics.go:72] Registering metrics
	I1208 01:40:01.658582       1 controller.go:711] "Syncing nftables rules"
	I1208 01:40:09.960644       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:40:09.960750       1 main.go:301] handling current node
	I1208 01:40:19.974926       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:40:19.974983       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db3477f42c8b050631a028c9c177b4b3e9855d1200a8f4514f8d127b54fbcb3b] <==
	I1208 01:39:28.766551       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1208 01:39:29.060321       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1208 01:39:29.061108       1 shared_informer.go:318] Caches are synced for configmaps
	I1208 01:39:29.061182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:39:29.067201       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1208 01:39:29.067488       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1208 01:39:29.067821       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1208 01:39:29.067964       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1208 01:39:29.068591       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:39:29.068616       1 aggregator.go:166] initial CRD sync complete...
	I1208 01:39:29.068688       1 autoregister_controller.go:141] Starting autoregister controller
	I1208 01:39:29.068715       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:39:29.068742       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:39:29.096899       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1208 01:39:29.655676       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:39:30.946732       1 controller.go:624] quota admission added evaluator for: namespaces
	I1208 01:39:31.026249       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1208 01:39:31.059250       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:39:31.070270       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:39:31.081517       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1208 01:39:31.147499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.238.142"}
	I1208 01:39:31.170116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.53.152"}
	I1208 01:39:41.414570       1 controller.go:624] quota admission added evaluator for: endpoints
	I1208 01:39:41.446039       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1208 01:39:41.581414       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [50b6126c143b75351adf2c3d4c08de132d5ab508f2efcfa73eecbbab003f4842] <==
	I1208 01:39:41.522226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.89462ms"
	I1208 01:39:41.535234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.161165ms"
	I1208 01:39:41.544284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.992289ms"
	I1208 01:39:41.544694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.998µs"
	I1208 01:39:41.548935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.648947ms"
	I1208 01:39:41.549173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="119.042µs"
	I1208 01:39:41.553775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.884µs"
	I1208 01:39:41.565832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.59µs"
	I1208 01:39:41.565922       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1208 01:39:41.571986       1 shared_informer.go:318] Caches are synced for daemon sets
	I1208 01:39:41.617794       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 01:39:41.623937       1 shared_informer.go:318] Caches are synced for stateful set
	I1208 01:39:41.631226       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 01:39:41.978362       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:39:42.005884       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:39:42.005954       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1208 01:39:47.340023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.326392ms"
	I1208 01:39:47.340103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.514µs"
	I1208 01:39:51.339693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.14µs"
	I1208 01:39:52.345989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.42µs"
	I1208 01:39:53.342626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.796µs"
	I1208 01:40:02.403251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.464µs"
	I1208 01:40:04.745836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.450995ms"
	I1208 01:40:04.747580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.103µs"
	I1208 01:40:11.846479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.988µs"
	
	
	==> kube-proxy [e7bfc63787639175c63bb390408cb799223ab69316a20f1ef610c444265dae43] <==
	I1208 01:39:30.220834       1 server_others.go:69] "Using iptables proxy"
	I1208 01:39:30.263893       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1208 01:39:30.304229       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:39:30.306198       1 server_others.go:152] "Using iptables Proxier"
	I1208 01:39:30.306297       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 01:39:30.306351       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 01:39:30.306415       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 01:39:30.306677       1 server.go:846] "Version info" version="v1.28.0"
	I1208 01:39:30.307119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:39:30.307911       1 config.go:188] "Starting service config controller"
	I1208 01:39:30.307971       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 01:39:30.308042       1 config.go:97] "Starting endpoint slice config controller"
	I1208 01:39:30.308081       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 01:39:30.308665       1 config.go:315] "Starting node config controller"
	I1208 01:39:30.308725       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 01:39:30.409698       1 shared_informer.go:318] Caches are synced for node config
	I1208 01:39:30.409728       1 shared_informer.go:318] Caches are synced for service config
	I1208 01:39:30.409765       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73dc2c8233cf3b38e74119af9ff7ac7f41e9b14c4ebe75ddf6ba7def29f90d74] <==
	I1208 01:39:26.403009       1 serving.go:348] Generated self-signed cert in-memory
	I1208 01:39:29.175531       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1208 01:39:29.175658       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:39:29.194467       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1208 01:39:29.198821       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1208 01:39:29.198880       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1208 01:39:29.212285       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1208 01:39:29.198898       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:39:29.212576       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 01:39:29.198906       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:39:29.214636       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1208 01:39:29.314128       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 01:39:29.314270       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1208 01:39:29.315332       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 08 01:39:29 old-k8s-version-661561 kubelet[783]: W1208 01:39:29.579824     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/crio-a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d WatchSource:0}: Error finding container a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d: Status 404 returned error can't find the container with id a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d
	Dec 08 01:39:34 old-k8s-version-661561 kubelet[783]: I1208 01:39:34.711789     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.521707     783 topology_manager.go:215] "Topology Admit Handler" podUID="def4e9ad-e2af-40d8-8910-1ba40eff5ffd" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.523245     783 topology_manager.go:215] "Topology Admit Handler" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689386     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3bd96566-58d8-476c-95e2-7ba8049d42e1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pc9v8\" (UID: \"3bd96566-58d8-476c-95e2-7ba8049d42e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689604     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmk9x\" (UniqueName: \"kubernetes.io/projected/3bd96566-58d8-476c-95e2-7ba8049d42e1-kube-api-access-gmk9x\") pod \"dashboard-metrics-scraper-5f989dc9cf-pc9v8\" (UID: \"3bd96566-58d8-476c-95e2-7ba8049d42e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689650     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/def4e9ad-e2af-40d8-8910-1ba40eff5ffd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dxkn2\" (UID: \"def4e9ad-e2af-40d8-8910-1ba40eff5ffd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689681     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5f4\" (UniqueName: \"kubernetes.io/projected/def4e9ad-e2af-40d8-8910-1ba40eff5ffd-kube-api-access-zz5f4\") pod \"kubernetes-dashboard-8694d4445c-dxkn2\" (UID: \"def4e9ad-e2af-40d8-8910-1ba40eff5ffd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:51 old-k8s-version-661561 kubelet[783]: I1208 01:39:51.319681     783 scope.go:117] "RemoveContainer" containerID="8a226b8ed078eca5d83b6d1ce21b2b0c9aa7e20d184a6110f78735da78e8f25f"
	Dec 08 01:39:51 old-k8s-version-661561 kubelet[783]: I1208 01:39:51.340103     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2" podStartSLOduration=5.635429551 podCreationTimestamp="2025-12-08 01:39:41 +0000 UTC" firstStartedPulling="2025-12-08 01:39:41.866157225 +0000 UTC m=+18.878899436" lastFinishedPulling="2025-12-08 01:39:46.569073408 +0000 UTC m=+23.581815619" observedRunningTime="2025-12-08 01:39:47.322793571 +0000 UTC m=+24.335535790" watchObservedRunningTime="2025-12-08 01:39:51.338345734 +0000 UTC m=+28.351087953"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: I1208 01:39:52.322647     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: E1208 01:39:52.324280     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: I1208 01:39:52.324447     783 scope.go:117] "RemoveContainer" containerID="8a226b8ed078eca5d83b6d1ce21b2b0c9aa7e20d184a6110f78735da78e8f25f"
	Dec 08 01:39:53 old-k8s-version-661561 kubelet[783]: I1208 01:39:53.327378     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:39:53 old-k8s-version-661561 kubelet[783]: E1208 01:39:53.327681     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:00 old-k8s-version-661561 kubelet[783]: I1208 01:40:00.366945     783 scope.go:117] "RemoveContainer" containerID="19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa"
	Dec 08 01:40:01 old-k8s-version-661561 kubelet[783]: I1208 01:40:01.829386     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: I1208 01:40:02.375881     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: I1208 01:40:02.376239     783 scope.go:117] "RemoveContainer" containerID="44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: E1208 01:40:02.376617     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:11 old-k8s-version-661561 kubelet[783]: I1208 01:40:11.829584     783 scope.go:117] "RemoveContainer" containerID="44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	Dec 08 01:40:11 old-k8s-version-661561 kubelet[783]: E1208 01:40:11.830370     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2919f0946ab1b27883a67e0a1d1f724f0c5c22dce6ff3b71fb09c7de4c9f2039] <==
	2025/12/08 01:39:46 Starting overwatch
	2025/12/08 01:39:46 Using namespace: kubernetes-dashboard
	2025/12/08 01:39:46 Using in-cluster config to connect to apiserver
	2025/12/08 01:39:46 Using secret token for csrf signing
	2025/12/08 01:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:39:46 Successful initial request to the apiserver, version: v1.28.0
	2025/12/08 01:39:46 Generating JWE encryption key
	2025/12/08 01:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:39:47 Initializing JWE encryption key from synchronized object
	2025/12/08 01:39:47 Creating in-cluster Sidecar client
	2025/12/08 01:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:39:47 Serving insecurely on HTTP port: 9090
	2025/12/08 01:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa] <==
	I1208 01:39:29.693383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:39:59.696064       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [252c3512ae3866f1479c8caddeae6aa2cc7b4ed75ae08c708c308767303721e6] <==
	I1208 01:40:00.572062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:40:00.607036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:40:00.607213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 01:40:18.013572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:40:18.014876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"986e9fb0-2e44-4a3d-b9f2-86404551ac54", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4 became leader
	I1208 01:40:18.031787       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4!
	I1208 01:40:18.132554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4!
	

-- /stdout --
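Note: the storage-provisioner restart above died with "dial tcp 10.96.0.1:443: i/o timeout" and kubelet was stopped at 01:40:18, which is consistent with the pause operation under test. As an illustrative cross-check only (not part of the harness, and assuming the kubeconfig context is named after the profile, as the kubectl step further down assumes), apiserver reachability from the host could be probed with:

	kubectl --context old-k8s-version-661561 get --raw /readyz   # illustrative sketch; context name assumed to match the profile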
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-661561 -n old-k8s-version-661561: exit status 2 (415.58709ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-661561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
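Note: the pod listing step above filters on phase with a field selector. When reproducing this post-mortem by hand, the same selector (taken verbatim from the harness invocation above) can be combined with -o wide to also see node placement; this is an illustrative variant, not a harness step:

	kubectl --context old-k8s-version-661561 get po -A --field-selector=status.phase!=Running -o wide   # -o wide added for illustration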
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-661561
helpers_test.go:243: (dbg) docker inspect old-k8s-version-661561:

-- stdout --
	[
	    {
	        "Id": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	        "Created": "2025-12-08T01:37:59.095493293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1016438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:39:16.334049128Z",
	            "FinishedAt": "2025-12-08T01:39:15.483975083Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hostname",
	        "HostsPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/hosts",
	        "LogPath": "/var/lib/docker/containers/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f-json.log",
	        "Name": "/old-k8s-version-661561",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-661561:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-661561",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f",
	                "LowerDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c88107c28277284dfcd201f3920d49ca0a89ffa71d1217e92859da85bb2534a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-661561",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-661561/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-661561",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-661561",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f84ccefd247cc9dc93998f7705a385407cbf5d00ae0386e3d727308e1cee879b",
	            "SandboxKey": "/var/run/docker/netns/f84ccefd247c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33778"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33780"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-661561": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:0e:32:1e:64:eb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f564ce91e5f1a7355aa0c3c6eaf3b409225f9ea728cbb26fa06f64c7acc7ac75",
	                    "EndpointID": "34cd7eab49e3b133961654f9c205a795e1f8c624fbd35c952cc28696aab491f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-661561",
	                        "bab08c504dac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
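Note: the inspect output above shows each published container port bound to an ephemeral 127.0.0.1 host port (for example 22/tcp -> 33777). The "Last Start" log further below reads the SSH mapping with a Go template; a standalone sketch of the same lookup, using the container name from this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-661561   # same template minikube uses in the provisioning log below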
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561: exit status 2 (541.20104ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-661561 logs -n 25: (1.910955455s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-000739 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo containerd config dump                                                                                                                                                                                                  │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo crio config                                                                                                                                                                                                             │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ delete  │ -p cilium-000739                                                                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:40:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:40:09.711961 1018441 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:40:09.712072 1018441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:09.712075 1018441 out.go:374] Setting ErrFile to fd 2...
	I1208 01:40:09.712079 1018441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:09.712335 1018441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:40:09.712732 1018441 out.go:368] Setting JSON to false
	I1208 01:40:09.713796 1018441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22942,"bootTime":1765135068,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:40:09.713923 1018441 start.go:143] virtualization:  
	I1208 01:40:09.717494 1018441 out.go:179] * [cert-expiration-428091] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:40:09.720505 1018441 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:40:09.720622 1018441 notify.go:221] Checking for updates...
	I1208 01:40:09.726153 1018441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:40:09.729046 1018441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:40:09.731866 1018441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:40:09.734573 1018441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:40:09.737479 1018441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:40:09.740683 1018441 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:09.741343 1018441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:40:09.781904 1018441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:40:09.782017 1018441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:09.846996 1018441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:40:09.83687235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:09.847096 1018441 docker.go:319] overlay module found
	I1208 01:40:09.850228 1018441 out.go:179] * Using the docker driver based on existing profile
	I1208 01:40:09.853172 1018441 start.go:309] selected driver: docker
	I1208 01:40:09.853183 1018441 start.go:927] validating driver "docker" against &{Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:09.853292 1018441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:40:09.854057 1018441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:09.921734 1018441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:40:09.912394799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:09.922080 1018441 cni.go:84] Creating CNI manager for ""
	I1208 01:40:09.922146 1018441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:09.922189 1018441 start.go:353] cluster config:
	{Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:09.925290 1018441 out.go:179] * Starting "cert-expiration-428091" primary control-plane node in "cert-expiration-428091" cluster
	I1208 01:40:09.928065 1018441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:40:09.931041 1018441 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:40:09.934083 1018441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:09.934126 1018441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:40:09.934144 1018441 cache.go:65] Caching tarball of preloaded images
	I1208 01:40:09.934152 1018441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:40:09.934230 1018441 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:40:09.934239 1018441 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:40:09.934353 1018441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/config.json ...
	I1208 01:40:09.954943 1018441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:40:09.954954 1018441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:40:09.954975 1018441 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:40:09.955045 1018441 start.go:360] acquireMachinesLock for cert-expiration-428091: {Name:mk3e9ac88d28b3bf834f648d1ec918bdd8ecc323 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:09.955113 1018441 start.go:364] duration metric: took 50.487µs to acquireMachinesLock for "cert-expiration-428091"
	I1208 01:40:09.955133 1018441 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:40:09.955137 1018441 fix.go:54] fixHost starting: 
	I1208 01:40:09.955471 1018441 cli_runner.go:164] Run: docker container inspect cert-expiration-428091 --format={{.State.Status}}
	I1208 01:40:09.984729 1018441 fix.go:112] recreateIfNeeded on cert-expiration-428091: state=Running err=<nil>
	W1208 01:40:09.984761 1018441 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:40:09.988069 1018441 out.go:252] * Updating the running docker "cert-expiration-428091" container ...
	I1208 01:40:09.988119 1018441 machine.go:94] provisionDockerMachine start ...
	I1208 01:40:09.988200 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.016308 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.016669 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.016677 1018441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:40:10.176007 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-428091
	
	I1208 01:40:10.176021 1018441 ubuntu.go:182] provisioning hostname "cert-expiration-428091"
	I1208 01:40:10.176091 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.193807 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.194105 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.194113 1018441 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-428091 && echo "cert-expiration-428091" | sudo tee /etc/hostname
	I1208 01:40:10.361304 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-428091
	
	I1208 01:40:10.361370 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.379043 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:10.379354 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:10.379368 1018441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-428091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-428091/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-428091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:40:10.539547 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:40:10.539561 1018441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:40:10.539582 1018441 ubuntu.go:190] setting up certificates
	I1208 01:40:10.539600 1018441 provision.go:84] configureAuth start
	I1208 01:40:10.539661 1018441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-428091
	I1208 01:40:10.558675 1018441 provision.go:143] copyHostCerts
	I1208 01:40:10.558795 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:40:10.558805 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:40:10.558926 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:40:10.559056 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:40:10.559061 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:40:10.559088 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:40:10.559145 1018441 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:40:10.559148 1018441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:40:10.559175 1018441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:40:10.559227 1018441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-428091 san=[127.0.0.1 192.168.85.2 cert-expiration-428091 localhost minikube]
	I1208 01:40:10.897078 1018441 provision.go:177] copyRemoteCerts
	I1208 01:40:10.897131 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:40:10.897176 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:10.914896 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:11.025371 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1208 01:40:11.051942 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:40:11.073586 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:40:11.093593 1018441 provision.go:87] duration metric: took 553.971087ms to configureAuth
	I1208 01:40:11.093612 1018441 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:40:11.093798 1018441 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:11.093893 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:11.113369 1018441 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:11.113676 1018441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1208 01:40:11.113687 1018441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:40:16.522427 1018441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:40:16.522441 1018441 machine.go:97] duration metric: took 6.534315351s to provisionDockerMachine
	I1208 01:40:16.522452 1018441 start.go:293] postStartSetup for "cert-expiration-428091" (driver="docker")
	I1208 01:40:16.522462 1018441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:40:16.522542 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:40:16.522606 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.542936 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.651164 1018441 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:40:16.655835 1018441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:40:16.655853 1018441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:40:16.655863 1018441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:40:16.655926 1018441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:40:16.656022 1018441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:40:16.656127 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:40:16.670888 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:16.689391 1018441 start.go:296] duration metric: took 166.925734ms for postStartSetup
	I1208 01:40:16.689463 1018441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:40:16.689525 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.707043 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.812254 1018441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:40:16.817908 1018441 fix.go:56] duration metric: took 6.862763351s for fixHost
	I1208 01:40:16.817940 1018441 start.go:83] releasing machines lock for "cert-expiration-428091", held for 6.862803573s
	I1208 01:40:16.818014 1018441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-428091
	I1208 01:40:16.837122 1018441 ssh_runner.go:195] Run: cat /version.json
	I1208 01:40:16.837173 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.837190 1018441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:40:16.837295 1018441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-428091
	I1208 01:40:16.864614 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:16.872956 1018441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/cert-expiration-428091/id_rsa Username:docker}
	I1208 01:40:17.069320 1018441 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:17.076189 1018441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:40:17.135070 1018441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:40:17.146374 1018441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:40:17.146435 1018441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:40:17.155718 1018441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:40:17.155731 1018441 start.go:496] detecting cgroup driver to use...
	I1208 01:40:17.155762 1018441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:40:17.155821 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:40:17.172158 1018441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:40:17.185722 1018441 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:40:17.185775 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:40:17.202176 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:40:17.215610 1018441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:40:17.357760 1018441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:40:17.511682 1018441 docker.go:234] disabling docker service ...
	I1208 01:40:17.511741 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:40:17.526989 1018441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:40:17.540503 1018441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:40:17.698777 1018441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:40:17.890913 1018441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:40:17.909056 1018441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:40:17.925711 1018441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:40:17.925771 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.934837 1018441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:40:17.934914 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.945124 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.954924 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.965081 1018441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:40:17.975263 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.985459 1018441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:17.995362 1018441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:18.008597 1018441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:40:18.021359 1018441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:40:18.030336 1018441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:18.238441 1018441 ssh_runner.go:195] Run: sudo systemctl restart crio
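(Illustrative sketch, not part of the captured log.) The sequence of sed edits above amounts to a small CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl, followed by a daemon-reload and restart. Written out by hand the equivalent result would look roughly like this; the TOML section names follow upstream CRI-O defaults and are an assumption, since the log never prints the final file:

# Illustrative only: equivalent drop-in produced by the sed edits logged above
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl daemon-reload && sudo systemctl restart crio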
	I1208 01:40:18.496623 1018441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:40:18.496683 1018441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:40:18.503278 1018441 start.go:564] Will wait 60s for crictl version
	I1208 01:40:18.503341 1018441 ssh_runner.go:195] Run: which crictl
	I1208 01:40:18.509633 1018441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:40:18.557232 1018441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:40:18.557312 1018441 ssh_runner.go:195] Run: crio --version
	I1208 01:40:18.592780 1018441 ssh_runner.go:195] Run: crio --version
	I1208 01:40:18.634408 1018441 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:40:18.637403 1018441 cli_runner.go:164] Run: docker network inspect cert-expiration-428091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:18.655734 1018441 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:40:18.660404 1018441 kubeadm.go:884] updating cluster {Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:40:18.660525 1018441 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:18.660578 1018441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:18.694781 1018441 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:18.694793 1018441 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:40:18.694884 1018441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:18.726006 1018441 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:18.726017 1018441 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:40:18.726025 1018441 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:40:18.726363 1018441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-428091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:40:18.726461 1018441 ssh_runner.go:195] Run: crio config
	I1208 01:40:18.815523 1018441 cni.go:84] Creating CNI manager for ""
	I1208 01:40:18.815535 1018441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:18.815552 1018441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:40:18.815574 1018441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-428091 NodeName:cert-expiration-428091 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:40:18.815708 1018441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-428091"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
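(Illustrative sketch, not part of the captured log.) Once this rendered config has been copied to /var/tmp/minikube/kubeadm.yaml.new (a few lines below), it could be sanity-checked against the kubeadm API types without touching the running cluster. Recent kubeadm releases ship a "config validate" subcommand, and the binary path matches the one the log finds under /var/lib/minikube/binaries; using it here is an assumption, not something the test actually runs:

# Illustrative only: validate the generated kubeadm config on the node
sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new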
	I1208 01:40:18.815787 1018441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:40:18.826070 1018441 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:40:18.826134 1018441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:40:18.835846 1018441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1208 01:40:18.849854 1018441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:40:18.863198 1018441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1208 01:40:18.876568 1018441 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:40:18.881039 1018441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:19.025554 1018441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:40:19.039683 1018441 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091 for IP: 192.168.85.2
	I1208 01:40:19.039694 1018441 certs.go:195] generating shared ca certs ...
	I1208 01:40:19.039708 1018441 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.039850 1018441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:40:19.039888 1018441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:40:19.039893 1018441 certs.go:257] generating profile certs ...
	W1208 01:40:19.040021 1018441 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1208 01:40:19.040057 1018441 certs.go:629] cert expired /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt: expiration: 2025-12-08 01:39:45 +0000 UTC, now: 2025-12-08 01:40:19.040040722 +0000 UTC m=+9.372682250
	I1208 01:40:19.040169 1018441 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key
	I1208 01:40:19.040185 1018441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt with IP's: []
	I1208 01:40:19.403826 1018441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt ...
	I1208 01:40:19.403848 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.crt: {Name:mk37cad1bc57037e279c363463249932239b3d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.404024 1018441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key ...
	I1208 01:40:19.404596 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/client.key: {Name:mk6db4f3019d110c5a1550104fd85393a313e603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1208 01:40:19.404890 1018441 out.go:285] ! Certificate apiserver.crt.71eab2ca has expired. Generating a new one...
	I1208 01:40:19.406913 1018441 certs.go:629] cert expired /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca: expiration: 2025-12-08 01:39:45 +0000 UTC, now: 2025-12-08 01:40:19.406900239 +0000 UTC m=+9.739541800
	I1208 01:40:19.407262 1018441 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key.71eab2ca
	I1208 01:40:19.407280 1018441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:40:19.741746 1018441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca ...
	I1208 01:40:19.741765 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca: {Name:mka2082797c5dde13ae7d4bbaf0a977204ce6868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.741930 1018441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key.71eab2ca ...
	I1208 01:40:19.741939 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key.71eab2ca: {Name:mk6cf5c24e8400415c917c44420e86d43a25f302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:19.742006 1018441 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt.71eab2ca -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt
	I1208 01:40:19.742162 1018441 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key.71eab2ca -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key
	W1208 01:40:19.742373 1018441 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1208 01:40:19.742447 1018441 certs.go:629] cert expired /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.crt: expiration: 2025-12-08 01:39:46 +0000 UTC, now: 2025-12-08 01:40:19.742440814 +0000 UTC m=+10.075082350
	I1208 01:40:19.742544 1018441 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.key
	I1208 01:40:19.742573 1018441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.crt with IP's: []
	I1208 01:40:20.210038 1018441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.crt ...
	I1208 01:40:20.210057 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.crt: {Name:mk3b97062aa3b4984aee45b1480309211dd8f71a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:20.210222 1018441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.key ...
	I1208 01:40:20.210231 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.key: {Name:mkbe81fdf895fec964d3b95058e17bd2c84cf6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:20.210428 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:40:20.210468 1018441 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:40:20.210476 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:40:20.213214 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:40:20.213293 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:40:20.213321 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:40:20.213392 1018441 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:20.214038 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:40:20.261973 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:40:20.332414 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:40:20.392135 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:40:20.438247 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1208 01:40:20.478360 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:40:20.536948 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:40:20.600096 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/cert-expiration-428091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:40:20.661700 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:40:20.731969 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:40:20.795454 1018441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:40:20.865109 1018441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:40:20.927108 1018441 ssh_runner.go:195] Run: openssl version
	I1208 01:40:20.961818 1018441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:40:20.984944 1018441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:40:21.000254 1018441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:40:21.010296 1018441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:40:21.010360 1018441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:40:21.115200 1018441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:21.147202 1018441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:21.164399 1018441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:40:21.178394 1018441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:21.188057 1018441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:21.188118 1018441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:21.278478 1018441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:40:21.292021 1018441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:40:21.305656 1018441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:40:21.327205 1018441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:40:21.333021 1018441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:40:21.333095 1018441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:40:21.428381 1018441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:40:21.445330 1018441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:40:21.456561 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:40:21.533653 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:40:21.608988 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:40:21.685810 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:40:21.774787 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:40:21.862508 1018441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
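(Illustrative sketch, not part of the captured log.) The `-checkend 86400` invocations above make openssl exit non-zero when a certificate expires within the next 24 hours, which is presumably how minikube decides whether the control-plane certs need renewing. To print the actual expiry dates of the same files instead, `-enddate` can be used; the paths below are the ones checked above:

# Illustrative only: show expiry dates of the certs checked above
for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
  sudo openssl x509 -noout -subject -enddate -in "/var/lib/minikube/certs/$c.crt"
done
sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt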
	I1208 01:40:21.940110 1018441 kubeadm.go:401] StartCluster: {Name:cert-expiration-428091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-428091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:21.940192 1018441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:40:21.940257 1018441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:40:22.016108 1018441 cri.go:89] found id: "aef7adc94e49408b62d09d329feca8c14b86cc094cb326398d10a151b0e26bb5"
	I1208 01:40:22.016121 1018441 cri.go:89] found id: "d064e5880ba4ff49e75d5addd039ccc77a1d3766049e271b010865cc6ddd400c"
	I1208 01:40:22.016124 1018441 cri.go:89] found id: "c8db83941d1906c832d026cd8b8838f550cf8629499773348212a3d47a5425cb"
	I1208 01:40:22.016127 1018441 cri.go:89] found id: "a4324f18eb2a5618f4227d9d07a6467f0b51045a379ddfeb01df13302fdc56cd"
	I1208 01:40:22.016129 1018441 cri.go:89] found id: "a41a9951801d22afbf2833caa8dd002d34465b7ba628556a6660d59e0c9a2a89"
	I1208 01:40:22.016133 1018441 cri.go:89] found id: "8e711db6c62409eac86e4e54cfa53ee4780e57fbbec8d21f6a147ba99b865135"
	I1208 01:40:22.016135 1018441 cri.go:89] found id: "f6f7d4c23ed40debee3698401681c17101f204a73c58515071ddeaa0c187dd92"
	I1208 01:40:22.016137 1018441 cri.go:89] found id: "a4980c05616331378257713fa577fcd35caca0c90e22b336836774e7dbb13226"
	I1208 01:40:22.016140 1018441 cri.go:89] found id: "e8714a3c3d927586992282f86f33f70dde2d0cec60957331c7432ca5da4376a5"
	I1208 01:40:22.016159 1018441 cri.go:89] found id: "6cd2f591d7618eb29d64cc079ff1e6f507317f3e15710f32a159db362b389520"
	I1208 01:40:22.016162 1018441 cri.go:89] found id: "9f80a5bbb768bd4505a05809b1c7023d340a421a33dda2d2ddb0ee8ad0f68801"
	I1208 01:40:22.016164 1018441 cri.go:89] found id: "b6718ef207afd71a31ac5d1176e224b4d7da5408dfdb16ab6c99f78bcb2565a2"
	I1208 01:40:22.016166 1018441 cri.go:89] found id: "e18abf39e734a0449941dcf4af242281e8add12e7273d871b0d522b910b1d94d"
	I1208 01:40:22.016168 1018441 cri.go:89] found id: "9482eed628055419951bd71178bccf0ea3292ef4b8d21d2650783dfc608eb31c"
	I1208 01:40:22.016179 1018441 cri.go:89] found id: "6a2f75384e6b19d34147419105a5b434ec2ea0367c561c3e17b509f61b1f0986"
	I1208 01:40:22.016185 1018441 cri.go:89] found id: "0d04db662ebbd899090d85a2d1cac5d9276a844c2efdfec6450289b5861ebbb2"
	I1208 01:40:22.016188 1018441 cri.go:89] found id: ""
	I1208 01:40:22.016239 1018441 ssh_runner.go:195] Run: sudo runc list -f json
	W1208 01:40:22.044611 1018441 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:40:22Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:40:22.044708 1018441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:40:22.058352 1018441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:40:22.058362 1018441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:40:22.058414 1018441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:40:22.066469 1018441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:40:22.067235 1018441 kubeconfig.go:125] found "cert-expiration-428091" server: "https://192.168.85.2:8443"
	I1208 01:40:22.068796 1018441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:40:22.096472 1018441 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:40:22.096498 1018441 kubeadm.go:602] duration metric: took 38.131557ms to restartPrimaryControlPlane
	I1208 01:40:22.096507 1018441 kubeadm.go:403] duration metric: took 156.409401ms to StartCluster
	I1208 01:40:22.096521 1018441 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:22.096587 1018441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:40:22.097513 1018441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:22.097741 1018441 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:40:22.098001 1018441 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:22.098032 1018441 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:40:22.098088 1018441 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-428091"
	I1208 01:40:22.098101 1018441 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-428091"
	W1208 01:40:22.098106 1018441 addons.go:248] addon storage-provisioner should already be in state true
	I1208 01:40:22.098139 1018441 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-428091"
	I1208 01:40:22.098149 1018441 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-428091"
	I1208 01:40:22.098424 1018441 cli_runner.go:164] Run: docker container inspect cert-expiration-428091 --format={{.State.Status}}
	I1208 01:40:22.098696 1018441 host.go:66] Checking if "cert-expiration-428091" exists ...
	I1208 01:40:22.099269 1018441 cli_runner.go:164] Run: docker container inspect cert-expiration-428091 --format={{.State.Status}}
	I1208 01:40:22.101158 1018441 out.go:179] * Verifying Kubernetes components...
	I1208 01:40:22.105069 1018441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:22.137352 1018441 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-428091"
	W1208 01:40:22.137363 1018441 addons.go:248] addon default-storageclass should already be in state true
	I1208 01:40:22.137386 1018441 host.go:66] Checking if "cert-expiration-428091" exists ...
	I1208 01:40:22.137799 1018441 cli_runner.go:164] Run: docker container inspect cert-expiration-428091 --format={{.State.Status}}
	I1208 01:40:22.151422 1018441 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.832996843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.840139523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.840858883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.856210166Z" level=info msg="Created container 44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper" id=5f0c77a5-366b-4c02-9870-f986adb9ffe8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.859046006Z" level=info msg="Starting container: 44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790" id=0795e3b7-4c60-4df8-b0db-b1a4e00676c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:40:01 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:01.861846859Z" level=info msg="Started container" PID=1641 containerID=44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper id=0795e3b7-4c60-4df8-b0db-b1a4e00676c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a9806a3f0954ce2796fbdf9ec08b5ce78639a08332fbd520ffe23e3f874c18a
	Dec 08 01:40:01 old-k8s-version-661561 conmon[1639]: conmon 44ff7c5ba10876a46c2a <ninfo>: container 1641 exited with status 1
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.377220417Z" level=info msg="Removing container: d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.387079953Z" level=info msg="Error loading conmon cgroup of container d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9: cgroup deleted" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:02 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:02.392555019Z" level=info msg="Removed container d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8/dashboard-metrics-scraper" id=ba6de755-2845-4641-92a6-0ffc5d35417c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.961059993Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967371172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967544228Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.967626485Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.97219133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.972252828Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.972274137Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978665899Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978699196Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.978722154Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.98485002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.98488714Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.984903657Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.991229302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:40:09 old-k8s-version-661561 crio[652]: time="2025-12-08T01:40:09.99126753Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	44ff7c5ba1087       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   0a9806a3f0954       dashboard-metrics-scraper-5f989dc9cf-pc9v8       kubernetes-dashboard
	252c3512ae386       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   b9f5c5df2ed9a       storage-provisioner                              kube-system
	2919f0946ab1b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   b1f34a1cdb5e4       kubernetes-dashboard-8694d4445c-dxkn2            kubernetes-dashboard
	519208bc470e9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   a0f6958cda4ca       coredns-5dd5756b68-xxvjs                         kube-system
	a0c8be5b4b2bc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   3ae9c854c906c       busybox                                          default
	19897ecc4f1f9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   b9f5c5df2ed9a       storage-provisioner                              kube-system
	afc1a2d7ec80c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   09ba7db83676a       kindnet-9jp8g                                    kube-system
	e7bfc63787639       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   0fa35f1c91243       kube-proxy-c9bhh                                 kube-system
	db3477f42c8b0       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   11593b4611d55       kube-apiserver-old-k8s-version-661561            kube-system
	1e731418e7e9e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   764f2c22b4307       etcd-old-k8s-version-661561                      kube-system
	50b6126c143b7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   e94c45cc73ee9       kube-controller-manager-old-k8s-version-661561   kube-system
	73dc2c8233cf3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   d2117bc2a4734       kube-scheduler-old-k8s-version-661561            kube-system
	
	
	==> coredns [519208bc470e9706bec61b9b5ac6968add358d9d73f601b1c1404beed17d739a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40908 - 50205 "HINFO IN 5316456193879195217.5395911338999867647. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013793654s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-661561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-661561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-661561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_38_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:38:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-661561
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:40:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:39:59 +0000   Mon, 08 Dec 2025 01:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-661561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                bbb3bea3-db6a-4a1e-9c0a-2e379053e90a
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-xxvjs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-661561                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-9jp8g                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-661561             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-661561    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-c9bhh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-661561             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-pc9v8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dxkn2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-661561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-661561 event: Registered Node old-k8s-version-661561 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-661561 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-661561 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-661561 event: Registered Node old-k8s-version-661561 in Controller
	
	
	==> dmesg <==
	[  +3.058839] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:04] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1e731418e7e9eb3ef33b29a3786cac63eb6d34337f3b85e70054f49effd66264] <==
	{"level":"info","ts":"2025-12-08T01:39:23.979675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-08T01:39:23.97972Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-08T01:39:23.979984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-08T01:39:23.980073Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-08T01:39:23.9802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:39:23.980254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-08T01:39:24.005394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-08T01:39:24.00553Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:39:24.006501Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-08T01:39:24.00855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-08T01:39:24.008663Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-08T01:39:25.733059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.73317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.733227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-08T01:39:25.733264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.733354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-08T01:39:25.743013Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-661561 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-08T01:39:25.743208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:39:25.744199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-08T01:39:25.744311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-08T01:39:25.749516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-08T01:39:25.749923Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-08T01:39:25.749954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:40:25 up  6:22,  0 user,  load average: 3.02, 2.58, 2.14
	Linux old-k8s-version-661561 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afc1a2d7ec80cd10fc94e723f0fa72658620a15e33de2ef0c6e7b721ae07d99b] <==
	I1208 01:39:29.740400       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:39:29.740803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1208 01:39:29.741022       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:39:29.741065       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:39:29.741125       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:39:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:39:29.958044       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:39:29.958066       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:39:29.958075       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:39:29.958341       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:39:59.961302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:39:59.961307       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:39:59.962600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:39:59.962708       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:40:01.658421       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:40:01.658455       1 metrics.go:72] Registering metrics
	I1208 01:40:01.658582       1 controller.go:711] "Syncing nftables rules"
	I1208 01:40:09.960644       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:40:09.960750       1 main.go:301] handling current node
	I1208 01:40:19.974926       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1208 01:40:19.974983       1 main.go:301] handling current node
	
	
	==> kube-apiserver [db3477f42c8b050631a028c9c177b4b3e9855d1200a8f4514f8d127b54fbcb3b] <==
	I1208 01:39:28.766551       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1208 01:39:29.060321       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1208 01:39:29.061108       1 shared_informer.go:318] Caches are synced for configmaps
	I1208 01:39:29.061182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:39:29.067201       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1208 01:39:29.067488       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1208 01:39:29.067821       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1208 01:39:29.067964       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1208 01:39:29.068591       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:39:29.068616       1 aggregator.go:166] initial CRD sync complete...
	I1208 01:39:29.068688       1 autoregister_controller.go:141] Starting autoregister controller
	I1208 01:39:29.068715       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:39:29.068742       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:39:29.096899       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1208 01:39:29.655676       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:39:30.946732       1 controller.go:624] quota admission added evaluator for: namespaces
	I1208 01:39:31.026249       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1208 01:39:31.059250       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:39:31.070270       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:39:31.081517       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1208 01:39:31.147499       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.238.142"}
	I1208 01:39:31.170116       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.53.152"}
	I1208 01:39:41.414570       1 controller.go:624] quota admission added evaluator for: endpoints
	I1208 01:39:41.446039       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1208 01:39:41.581414       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [50b6126c143b75351adf2c3d4c08de132d5ab508f2efcfa73eecbbab003f4842] <==
	I1208 01:39:41.522226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.89462ms"
	I1208 01:39:41.535234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.161165ms"
	I1208 01:39:41.544284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.992289ms"
	I1208 01:39:41.544694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="92.998µs"
	I1208 01:39:41.548935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.648947ms"
	I1208 01:39:41.549173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="119.042µs"
	I1208 01:39:41.553775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.884µs"
	I1208 01:39:41.565832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.59µs"
	I1208 01:39:41.565922       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1208 01:39:41.571986       1 shared_informer.go:318] Caches are synced for daemon sets
	I1208 01:39:41.617794       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 01:39:41.623937       1 shared_informer.go:318] Caches are synced for stateful set
	I1208 01:39:41.631226       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 01:39:41.978362       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:39:42.005884       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 01:39:42.005954       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1208 01:39:47.340023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.326392ms"
	I1208 01:39:47.340103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.514µs"
	I1208 01:39:51.339693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.14µs"
	I1208 01:39:52.345989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.42µs"
	I1208 01:39:53.342626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.796µs"
	I1208 01:40:02.403251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.464µs"
	I1208 01:40:04.745836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.450995ms"
	I1208 01:40:04.747580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.103µs"
	I1208 01:40:11.846479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.988µs"
	
	
	==> kube-proxy [e7bfc63787639175c63bb390408cb799223ab69316a20f1ef610c444265dae43] <==
	I1208 01:39:30.220834       1 server_others.go:69] "Using iptables proxy"
	I1208 01:39:30.263893       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1208 01:39:30.304229       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:39:30.306198       1 server_others.go:152] "Using iptables Proxier"
	I1208 01:39:30.306297       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 01:39:30.306351       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 01:39:30.306415       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 01:39:30.306677       1 server.go:846] "Version info" version="v1.28.0"
	I1208 01:39:30.307119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:39:30.307911       1 config.go:188] "Starting service config controller"
	I1208 01:39:30.307971       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 01:39:30.308042       1 config.go:97] "Starting endpoint slice config controller"
	I1208 01:39:30.308081       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 01:39:30.308665       1 config.go:315] "Starting node config controller"
	I1208 01:39:30.308725       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 01:39:30.409698       1 shared_informer.go:318] Caches are synced for node config
	I1208 01:39:30.409728       1 shared_informer.go:318] Caches are synced for service config
	I1208 01:39:30.409765       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73dc2c8233cf3b38e74119af9ff7ac7f41e9b14c4ebe75ddf6ba7def29f90d74] <==
	I1208 01:39:26.403009       1 serving.go:348] Generated self-signed cert in-memory
	I1208 01:39:29.175531       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1208 01:39:29.175658       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:39:29.194467       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1208 01:39:29.198821       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1208 01:39:29.198880       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1208 01:39:29.212285       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1208 01:39:29.198898       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:39:29.212576       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 01:39:29.198906       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:39:29.214636       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1208 01:39:29.314128       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 01:39:29.314270       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1208 01:39:29.315332       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 08 01:39:29 old-k8s-version-661561 kubelet[783]: W1208 01:39:29.579824     783 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bab08c504dac3471f066ea990542bd1d8d15c8597a8c30e1f2765253e2acc43f/crio-a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d WatchSource:0}: Error finding container a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d: Status 404 returned error can't find the container with id a0f6958cda4ca2771e1ae397e16c374b611fbc30857ff43c8a6981ea3293594d
	Dec 08 01:39:34 old-k8s-version-661561 kubelet[783]: I1208 01:39:34.711789     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.521707     783 topology_manager.go:215] "Topology Admit Handler" podUID="def4e9ad-e2af-40d8-8910-1ba40eff5ffd" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.523245     783 topology_manager.go:215] "Topology Admit Handler" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689386     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3bd96566-58d8-476c-95e2-7ba8049d42e1-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-pc9v8\" (UID: \"3bd96566-58d8-476c-95e2-7ba8049d42e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689604     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmk9x\" (UniqueName: \"kubernetes.io/projected/3bd96566-58d8-476c-95e2-7ba8049d42e1-kube-api-access-gmk9x\") pod \"dashboard-metrics-scraper-5f989dc9cf-pc9v8\" (UID: \"3bd96566-58d8-476c-95e2-7ba8049d42e1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689650     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/def4e9ad-e2af-40d8-8910-1ba40eff5ffd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dxkn2\" (UID: \"def4e9ad-e2af-40d8-8910-1ba40eff5ffd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:41 old-k8s-version-661561 kubelet[783]: I1208 01:39:41.689681     783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5f4\" (UniqueName: \"kubernetes.io/projected/def4e9ad-e2af-40d8-8910-1ba40eff5ffd-kube-api-access-zz5f4\") pod \"kubernetes-dashboard-8694d4445c-dxkn2\" (UID: \"def4e9ad-e2af-40d8-8910-1ba40eff5ffd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2"
	Dec 08 01:39:51 old-k8s-version-661561 kubelet[783]: I1208 01:39:51.319681     783 scope.go:117] "RemoveContainer" containerID="8a226b8ed078eca5d83b6d1ce21b2b0c9aa7e20d184a6110f78735da78e8f25f"
	Dec 08 01:39:51 old-k8s-version-661561 kubelet[783]: I1208 01:39:51.340103     783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dxkn2" podStartSLOduration=5.635429551 podCreationTimestamp="2025-12-08 01:39:41 +0000 UTC" firstStartedPulling="2025-12-08 01:39:41.866157225 +0000 UTC m=+18.878899436" lastFinishedPulling="2025-12-08 01:39:46.569073408 +0000 UTC m=+23.581815619" observedRunningTime="2025-12-08 01:39:47.322793571 +0000 UTC m=+24.335535790" watchObservedRunningTime="2025-12-08 01:39:51.338345734 +0000 UTC m=+28.351087953"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: I1208 01:39:52.322647     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: E1208 01:39:52.324280     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:39:52 old-k8s-version-661561 kubelet[783]: I1208 01:39:52.324447     783 scope.go:117] "RemoveContainer" containerID="8a226b8ed078eca5d83b6d1ce21b2b0c9aa7e20d184a6110f78735da78e8f25f"
	Dec 08 01:39:53 old-k8s-version-661561 kubelet[783]: I1208 01:39:53.327378     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:39:53 old-k8s-version-661561 kubelet[783]: E1208 01:39:53.327681     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:00 old-k8s-version-661561 kubelet[783]: I1208 01:40:00.366945     783 scope.go:117] "RemoveContainer" containerID="19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa"
	Dec 08 01:40:01 old-k8s-version-661561 kubelet[783]: I1208 01:40:01.829386     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: I1208 01:40:02.375881     783 scope.go:117] "RemoveContainer" containerID="d1173cdaba46b2929f9b20ffc5eb95a15418250c9784e761d654de2ea28faae9"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: I1208 01:40:02.376239     783 scope.go:117] "RemoveContainer" containerID="44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	Dec 08 01:40:02 old-k8s-version-661561 kubelet[783]: E1208 01:40:02.376617     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:11 old-k8s-version-661561 kubelet[783]: I1208 01:40:11.829584     783 scope.go:117] "RemoveContainer" containerID="44ff7c5ba10876a46c2a0acdea8da18384beb60d5d7a3f018a7492acdc34c790"
	Dec 08 01:40:11 old-k8s-version-661561 kubelet[783]: E1208 01:40:11.830370     783 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-pc9v8_kubernetes-dashboard(3bd96566-58d8-476c-95e2-7ba8049d42e1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-pc9v8" podUID="3bd96566-58d8-476c-95e2-7ba8049d42e1"
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:40:18 old-k8s-version-661561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2919f0946ab1b27883a67e0a1d1f724f0c5c22dce6ff3b71fb09c7de4c9f2039] <==
	2025/12/08 01:39:46 Using namespace: kubernetes-dashboard
	2025/12/08 01:39:46 Using in-cluster config to connect to apiserver
	2025/12/08 01:39:46 Using secret token for csrf signing
	2025/12/08 01:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:39:46 Successful initial request to the apiserver, version: v1.28.0
	2025/12/08 01:39:46 Generating JWE encryption key
	2025/12/08 01:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:39:47 Initializing JWE encryption key from synchronized object
	2025/12/08 01:39:47 Creating in-cluster Sidecar client
	2025/12/08 01:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:39:47 Serving insecurely on HTTP port: 9090
	2025/12/08 01:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:39:46 Starting overwatch
	
	
	==> storage-provisioner [19897ecc4f1f9fbc64086800e6142584fd60c43e5b2dcc7a0857b43695c182fa] <==
	I1208 01:39:29.693383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:39:59.696064       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [252c3512ae3866f1479c8caddeae6aa2cc7b4ed75ae08c708c308767303721e6] <==
	I1208 01:40:00.572062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:40:00.607036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:40:00.607213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 01:40:18.013572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:40:18.014876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"986e9fb0-2e44-4a3d-b9f2-86404551ac54", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4 became leader
	I1208 01:40:18.031787       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4!
	I1208 01:40:18.132554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-661561_45d97d49-84ca-45ac-8e23-b399a04bacf4!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-661561 -n old-k8s-version-661561
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-661561 -n old-k8s-version-661561: exit status 2 (516.417235ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-661561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.69s)

TestStartStop/group/no-preload/serial/FirstStart (518.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m37.324587316s)

-- stdout --
	* [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1208 01:40:30.339911 1021094 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:40:30.340049 1021094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:30.340061 1021094 out.go:374] Setting ErrFile to fd 2...
	I1208 01:40:30.340066 1021094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:30.340361 1021094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:40:30.340836 1021094 out.go:368] Setting JSON to false
	I1208 01:40:30.341793 1021094 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22963,"bootTime":1765135068,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:40:30.341870 1021094 start.go:143] virtualization:  
	I1208 01:40:30.345721 1021094 out.go:179] * [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:40:30.348912 1021094 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:40:30.349037 1021094 notify.go:221] Checking for updates...
	I1208 01:40:30.354995 1021094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:40:30.357928 1021094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:40:30.360809 1021094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:40:30.363662 1021094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:40:30.366544 1021094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:40:30.369924 1021094 config.go:182] Loaded profile config "cert-expiration-428091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:30.370043 1021094 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:40:30.397510 1021094 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:40:30.397704 1021094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:30.455982 1021094 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:40:30.44688692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:30.456084 1021094 docker.go:319] overlay module found
	I1208 01:40:30.459245 1021094 out.go:179] * Using the docker driver based on user configuration
	I1208 01:40:30.462052 1021094 start.go:309] selected driver: docker
	I1208 01:40:30.462075 1021094 start.go:927] validating driver "docker" against <nil>
	I1208 01:40:30.462089 1021094 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:40:30.462917 1021094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:30.528747 1021094 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 01:40:30.519457822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:30.528919 1021094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:40:30.529132 1021094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:40:30.532402 1021094 out.go:179] * Using Docker driver with root privileges
	I1208 01:40:30.538807 1021094 cni.go:84] Creating CNI manager for ""
	I1208 01:40:30.538937 1021094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:30.538947 1021094 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:40:30.539030 1021094 start.go:353] cluster config:
	{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:30.542172 1021094 out.go:179] * Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	I1208 01:40:30.545111 1021094 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:40:30.548159 1021094 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:40:30.550972 1021094 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:40:30.551101 1021094 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:40:30.551134 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json: {Name:mkdc3f9f1dc20797b07068df976011d2e7bf26ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:30.551303 1021094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:40:30.555110 1021094 cache.go:107] acquiring lock: {Name:mkb488f77623cf5688783098c8af8f37e2ccf2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.555335 1021094 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:30.555540 1021094 cache.go:107] acquiring lock: {Name:mk46c5b5a799bb57ec4fc052703439a88454d6c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.555665 1021094 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:30.555770 1021094 cache.go:107] acquiring lock: {Name:mkd948fd592ac79c85c21b030b5344321f29366e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.555835 1021094 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:30.555949 1021094 cache.go:107] acquiring lock: {Name:mk937612bf3f3168a18ddaac7a61a8bae665cda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.556017 1021094 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:30.556109 1021094 cache.go:107] acquiring lock: {Name:mk12ceb359422aeb489a7c1f33a7ec5ed809694f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.556169 1021094 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:30.556273 1021094 cache.go:107] acquiring lock: {Name:mk26da6a2fb489baaddcecf1a83cf045eefe1b48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.556337 1021094 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1208 01:40:30.556420 1021094 cache.go:107] acquiring lock: {Name:mk855f3a105742255ca91bc6cacb964e2740cdc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.556476 1021094 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:30.556575 1021094 cache.go:107] acquiring lock: {Name:mk695dd8e1a707c0142f2b3898e789d03306fcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.556664 1021094 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:30.563669 1021094 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:30.563882 1021094 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:30.566080 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:30.566462 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:30.566622 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:30.566741 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:30.567325 1021094 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1208 01:40:30.567742 1021094 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:30.578216 1021094 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:40:30.578238 1021094 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:40:30.578252 1021094 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:40:30.578284 1021094 start.go:360] acquireMachinesLock for no-preload-389831: {Name:mkc005fe96402610ac376caa09ffa5218e546ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:30.578380 1021094 start.go:364] duration metric: took 81.01µs to acquireMachinesLock for "no-preload-389831"
	I1208 01:40:30.578405 1021094 start.go:93] Provisioning new machine with config: &{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:40:30.578464 1021094 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:40:30.582905 1021094 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:40:30.583144 1021094 start.go:159] libmachine.API.Create for "no-preload-389831" (driver="docker")
	I1208 01:40:30.583175 1021094 client.go:173] LocalClient.Create starting
	I1208 01:40:30.583235 1021094 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:40:30.583268 1021094 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:30.583286 1021094 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:30.583344 1021094 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:40:30.583360 1021094 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:30.583372 1021094 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:30.583731 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:40:30.617511 1021094 cli_runner.go:211] docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:40:30.617597 1021094 network_create.go:284] running [docker network inspect no-preload-389831] to gather additional debugging logs...
	I1208 01:40:30.617616 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831
	W1208 01:40:30.640046 1021094 cli_runner.go:211] docker network inspect no-preload-389831 returned with exit code 1
	I1208 01:40:30.640078 1021094 network_create.go:287] error running [docker network inspect no-preload-389831]: docker network inspect no-preload-389831: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-389831 not found
	I1208 01:40:30.640091 1021094 network_create.go:289] output of [docker network inspect no-preload-389831]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-389831 not found
	
	** /stderr **
	I1208 01:40:30.640179 1021094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:30.659736 1021094 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:40:30.660060 1021094 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:40:30.660405 1021094 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:40:30.660790 1021094 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c3ba80}
	I1208 01:40:30.660815 1021094 network_create.go:124] attempt to create docker network no-preload-389831 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:40:30.660873 1021094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-389831 no-preload-389831
	I1208 01:40:30.785279 1021094 network_create.go:108] docker network no-preload-389831 192.168.76.0/24 created
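	[Editor's note] The lines above show the driver skipping subnets already claimed by other bridges, settling on 192.168.76.0/24, and creating a dedicated Docker network for the profile. As a purely illustrative aside (an assumption, not minikube's actual cli_runner code), the same step can be sketched in Go by shelling out to the docker CLI with the flags visible in the log:

```go
// Hedged sketch: create a labelled bridge network for a minikube profile by
// invoking the docker CLI, mirroring the "docker network create" flags logged
// above. The function name and error handling are illustrative assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func createProfileNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create %s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	if err := createProfileNetwork("no-preload-389831", "192.168.76.0/24", "192.168.76.1", 1500); err != nil {
		fmt.Println(err)
	}
}
```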
	I1208 01:40:30.785310 1021094 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-389831" container
	I1208 01:40:30.785392 1021094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:40:30.805170 1021094 cli_runner.go:164] Run: docker volume create no-preload-389831 --label name.minikube.sigs.k8s.io=no-preload-389831 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:40:30.827543 1021094 oci.go:103] Successfully created a docker volume no-preload-389831
	I1208 01:40:30.827643 1021094 cli_runner.go:164] Run: docker run --rm --name no-preload-389831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --entrypoint /usr/bin/test -v no-preload-389831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:40:30.943056 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:30.943305 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:40:30.945628 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:30.953346 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:40:30.969327 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:40:31.004580 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:40:31.011116 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:31.105258 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:40:31.110837 1021094 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 554.555793ms
	I1208 01:40:31.110891 1021094 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:40:31.358672 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:40:31.358790 1021094 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 802.680779ms
	I1208 01:40:31.358876 1021094 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	W1208 01:40:31.847628 1021094 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:40:31.847676 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:40:31.955136 1021094 cli_runner.go:217] Completed: docker run --rm --name no-preload-389831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --entrypoint /usr/bin/test -v no-preload-389831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.127435438s)
	I1208 01:40:31.955160 1021094 oci.go:107] Successfully prepared a docker volume no-preload-389831
	I1208 01:40:31.955190 1021094 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1208 01:40:31.955323 1021094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:40:31.955427 1021094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:40:31.975281 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:40:31.975360 1021094 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.419412983s
	I1208 01:40:31.975388 1021094 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:40:32.056401 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:40:32.056492 1021094 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.499910533s
	I1208 01:40:32.056518 1021094 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:40:32.083362 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:40:32.083443 1021094 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.527905135s
	I1208 01:40:32.083470 1021094 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:40:32.097637 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:40:32.098131 1021094 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.54235797s
	I1208 01:40:32.098272 1021094 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:40:32.138269 1021094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-389831 --name no-preload-389831 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-389831 --network no-preload-389831 --ip 192.168.76.2 --volume no-preload-389831:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:40:32.148070 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:40:32.148095 1021094 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 1.591676321s
	I1208 01:40:32.148108 1021094 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:40:32.372606 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:40:32.372640 1021094 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.817550811s
	I1208 01:40:32.372654 1021094 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:40:32.372678 1021094 cache.go:87] Successfully saved all images to host disk.
	I1208 01:40:32.598398 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Running}}
	I1208 01:40:32.615462 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:32.648114 1021094 cli_runner.go:164] Run: docker exec no-preload-389831 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:40:32.755468 1021094 oci.go:144] the created container "no-preload-389831" has a running status.
	I1208 01:40:32.755495 1021094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa...
	I1208 01:40:34.036590 1021094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:40:34.057539 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:34.081456 1021094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:40:34.081478 1021094 kic_runner.go:114] Args: [docker exec --privileged no-preload-389831 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:40:34.137445 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:34.157285 1021094 machine.go:94] provisionDockerMachine start ...
	I1208 01:40:34.157383 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:34.178906 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:34.179255 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:34.179278 1021094 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:40:34.179934 1021094 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:40:37.398648 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:40:37.398671 1021094 ubuntu.go:182] provisioning hostname "no-preload-389831"
	I1208 01:40:37.398735 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:37.441103 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:37.441409 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:37.441421 1021094 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-389831 && echo "no-preload-389831" | sudo tee /etc/hostname
	I1208 01:40:37.646779 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:40:37.647122 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:37.688959 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:37.689261 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:37.689277 1021094 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-389831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-389831/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-389831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:40:37.879741 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:40:37.879770 1021094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:40:37.879797 1021094 ubuntu.go:190] setting up certificates
	I1208 01:40:37.879807 1021094 provision.go:84] configureAuth start
	I1208 01:40:37.879866 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:37.908731 1021094 provision.go:143] copyHostCerts
	I1208 01:40:37.908802 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:40:37.908812 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:40:37.908887 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:40:37.908979 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:40:37.908984 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:40:37.909007 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:40:37.909057 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:40:37.909061 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:40:37.909085 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:40:37.909128 1021094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.no-preload-389831 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-389831]
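	[Editor's note] configureAuth above issues a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube, and the profile name. A compact crypto/x509 sketch of that kind of issuance follows; it is an illustration only, with an inline self-signed CA standing in for ca.pem/ca-key.pem and error handling elided:

```go
// Hedged sketch: issue a server cert with the SANs listed in the provision
// log line above, signed by a throwaway CA. Not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for the profile's ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-389831"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-389831"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```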
	I1208 01:40:38.740328 1021094 provision.go:177] copyRemoteCerts
	I1208 01:40:38.740403 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:40:38.740448 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:38.756928 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:38.862526 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:40:38.879576 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:40:38.896985 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:40:38.915461 1021094 provision.go:87] duration metric: took 1.035629128s to configureAuth
	I1208 01:40:38.915542 1021094 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:40:38.915744 1021094 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:40:38.915859 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:38.933685 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:38.934003 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:38.934026 1021094 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:40:39.318219 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:40:39.318243 1021094 machine.go:97] duration metric: took 5.160935414s to provisionDockerMachine
	I1208 01:40:39.318254 1021094 client.go:176] duration metric: took 8.735073014s to LocalClient.Create
	I1208 01:40:39.318271 1021094 start.go:167] duration metric: took 8.735127964s to libmachine.API.Create "no-preload-389831"
	I1208 01:40:39.318278 1021094 start.go:293] postStartSetup for "no-preload-389831" (driver="docker")
	I1208 01:40:39.318290 1021094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:40:39.318354 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:40:39.318404 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.340613 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.446809 1021094 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:40:39.450174 1021094 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:40:39.450249 1021094 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:40:39.450268 1021094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:40:39.450331 1021094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:40:39.450414 1021094 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:40:39.450525 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:40:39.457932 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:39.475530 1021094 start.go:296] duration metric: took 157.236775ms for postStartSetup
	I1208 01:40:39.475890 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:39.492406 1021094 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:40:39.492698 1021094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:40:39.492748 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.509119 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.611848 1021094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:40:39.616363 1021094 start.go:128] duration metric: took 9.03788352s to createHost
	I1208 01:40:39.616392 1021094 start.go:83] releasing machines lock for "no-preload-389831", held for 9.03800294s
	I1208 01:40:39.616474 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:39.637241 1021094 ssh_runner.go:195] Run: cat /version.json
	I1208 01:40:39.637302 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.637554 1021094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:40:39.637619 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.658434 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.664626 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.762364 1021094 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:39.858979 1021094 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:40:39.891167 1021094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:40:39.895381 1021094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:40:39.895456 1021094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:40:39.923048 1021094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:40:39.923069 1021094 start.go:496] detecting cgroup driver to use...
	I1208 01:40:39.923103 1021094 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:40:39.923181 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:40:39.940954 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:40:39.953362 1021094 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:40:39.953456 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:40:39.970866 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:40:39.987984 1021094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:40:40.115128 1021094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:40:40.240743 1021094 docker.go:234] disabling docker service ...
	I1208 01:40:40.240809 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:40:40.261355 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:40:40.274583 1021094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:40:40.395150 1021094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:40:40.515923 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:40:40.528632 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:40:40.542509 1021094 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:40:40.542613 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.551518 1021094 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:40:40.551618 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.561009 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.569512 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.578268 1021094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:40:40.586416 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.595265 1021094 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.608588 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.617344 1021094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:40:40.624856 1021094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:40:40.632091 1021094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:40.748541 1021094 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:40:40.922344 1021094 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:40:40.922413 1021094 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
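	[Editor's note] The "Will wait 60s for socket path" step above amounts to polling for the runtime socket with a bounded deadline before moving on to the crictl version check. A minimal sketch of that pattern, assuming a simple stat-and-sleep loop (not minikube's actual start.go logic):

```go
// Hedged sketch: wait for a container-runtime socket to appear, giving up
// after a fixed timeout, as the "Will wait 60s for socket path" line implies.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket path exists, runtime is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```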
	I1208 01:40:40.926490 1021094 start.go:564] Will wait 60s for crictl version
	I1208 01:40:40.926588 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:40.931188 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:40:40.958216 1021094 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:40:40.958332 1021094 ssh_runner.go:195] Run: crio --version
	I1208 01:40:40.988402 1021094 ssh_runner.go:195] Run: crio --version
	I1208 01:40:41.044202 1021094 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:40:41.046973 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:41.069224 1021094 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:40:41.073380 1021094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
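	[Editor's note] The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway. The same idea in a short Go sketch (illustrative only; the helper name is an assumption):

```go
// Hedged sketch: pin host.minikube.internal to the gateway IP by filtering
// any stale entry out of /etc/hosts and appending a fresh mapping, mirroring
// the grep -v / echo pipeline in the log line above.
package main

import (
	"os"
	"strings"
)

func pinHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), name) {
			continue // drop any existing mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHostEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal")
}
```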
	I1208 01:40:41.086258 1021094 kubeadm.go:884] updating cluster {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:40:41.086377 1021094 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:40:41.086429 1021094 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:41.121676 1021094 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:40:41.121706 1021094 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1208 01:40:41.121762 1021094 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:41.121782 1021094 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.121959 1021094 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.121964 1021094 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.122042 1021094 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.122057 1021094 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.122121 1021094 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.122138 1021094 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.123467 1021094 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.123716 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.123841 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.123954 1021094 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.124078 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.124198 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.124316 1021094 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:41.124447 1021094 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.358730 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.378407 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.397963 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.411580 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.418810 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.459685 1021094 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1208 01:40:41.459721 1021094 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.459767 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.463120 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1208 01:40:41.467088 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.487448 1021094 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1208 01:40:41.487487 1021094 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.487544 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.535156 1021094 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1208 01:40:41.535196 1021094 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.535244 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.629655 1021094 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1208 01:40:41.629693 1021094 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.629745 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.644365 1021094 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1208 01:40:41.644408 1021094 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.644461 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.644529 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.653325 1021094 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1208 01:40:41.653370 1021094 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.653416 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.670358 1021094 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1208 01:40:41.670395 1021094 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.670443 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.670509 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.670562 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.670625 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.733494 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.733576 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.733643 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:41.804067 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.804138 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.804193 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.804248 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.897631 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.897714 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:42.016738 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:42.016995 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:42.017090 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:42.017170 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:42.017251 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:42.090016 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:42.096041 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:40:42.096158 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:42.239473 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:40:42.239607 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:42.239709 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:42.239768 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:42.239817 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:42.239880 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:42.239924 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:42.239975 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:42.283121 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:40:42.283225 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:42.283286 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1208 01:40:42.283299 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1208 01:40:42.384700 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.384789 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1208 01:40:42.384891 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1208 01:40:42.384923 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1208 01:40:42.385024 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:40:42.385134 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.385214 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.385264 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1208 01:40:42.385354 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:42.385441 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:42.385537 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.385585 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	W1208 01:40:42.408987 1021094 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:40:42.409159 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:42.448948 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.448991 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1208 01:40:42.449028 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1208 01:40:42.449041 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
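	[Editor's note] The interleaved blocks above all follow one pattern: stat the target under /var/lib/minikube/images, and only when that existence check fails, transfer the cached tarball. A simplified local analogue of that check-then-copy step (an assumption; minikube performs it over SSH via ssh_runner/scp, and the paths below are illustrative):

```go
// Hedged sketch: copy a cached image tarball to its destination only when an
// existence check fails, mirroring the stat -> scp pattern in the log above.
package main

import (
	"fmt"
	"io"
	"os"
)

func ensureImageTarball(cachePath, destPath string) error {
	if _, err := os.Stat(destPath); err == nil {
		return nil // already present, skip the transfer
	}
	src, err := os.Open(cachePath)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	// Hypothetical paths for illustration only.
	if err := ensureImageTarball(
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	); err != nil {
		fmt.Println(err)
	}
}
```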
	I1208 01:40:42.655244 1021094 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1208 01:40:42.655295 1021094 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:42.655346 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:42.702200 1021094 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.702267 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.769661 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:43.233664 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:43.233763 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1208 01:40:43.233839 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:43.233907 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:45.289146 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.055202589s)
	I1208 01:40:45.289180 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1208 01:40:45.289198 1021094 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:45.289255 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:45.289318 1021094 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055593018s)
	I1208 01:40:45.289362 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:47.879635 1021094 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.590245918s)
	I1208 01:40:47.879676 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:40:47.879768 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:47.879905 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.590635429s)
	I1208 01:40:47.879913 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1208 01:40:47.879931 1021094 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:47.879959 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:49.505230 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.625251106s)
	I1208 01:40:49.505256 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1208 01:40:49.505274 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:49.505322 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:49.505384 1021094 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.625606901s)
	I1208 01:40:49.505402 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1208 01:40:49.505417 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1208 01:40:51.515481 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.010138753s)
	I1208 01:40:51.515506 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1208 01:40:51.515534 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:51.515595 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:53.283994 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.768373302s)
	I1208 01:40:53.284022 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1208 01:40:53.284040 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:53.284087 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:54.776767 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.492643815s)
	I1208 01:40:54.776790 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1208 01:40:54.776809 1021094 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:54.776887 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:55.542261 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1208 01:40:55.542302 1021094 cache_images.go:125] Successfully loaded all cached images
	I1208 01:40:55.542334 1021094 cache_images.go:94] duration metric: took 14.42058748s to LoadCachedImages
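	(Note: the pattern logged above — a `stat -c "%s %y"` existence check on the node, followed by an scp from the local image cache when the stat fails — can be sketched as below. This is a simplified local illustration with placeholder paths, not minikube's actual ssh_runner code, which performs the stat and copy over SSH.)

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFromCache mimics the "existence check, then transfer" step seen in the
// log: if dst is missing (stat fails), copy it from the local cache file.
// Local sketch only; minikube runs the stat and copy on the remote node.
func ensureFromCache(cache, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	}
	src, err := os.Open(cache)
	if err != nil {
		return err
	}
	defer src.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, src)
	if err != nil {
		return err
	}
	fmt.Printf("transferred %s --> %s (%d bytes)\n", cache, dst, n)
	return nil
}

func main() {
	// Placeholder paths for illustration only.
	if err := ensureFromCache("/tmp/cache/pause_3.10.1", "/tmp/images/pause_3.10.1"); err != nil {
		fmt.Println("transfer failed:", err)
	}
}
```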
	I1208 01:40:55.542353 1021094 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:40:55.542494 1021094 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-389831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
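	(The kubelet unit drop-in shown above is rendered from a template and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log. A rough sketch of rendering a similar drop-in with Go's text/template follows; the template text and field names here are illustrative, not minikube's actual template.)

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative only: a tiny template in the spirit of the kubelet drop-in
// logged above. The template body and struct fields are made up for this sketch.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	data := struct {
		Runtime, KubeletPath, NodeName, NodeIP string
	}{
		Runtime:     "crio",
		KubeletPath: "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		NodeName:    "no-preload-389831",
		NodeIP:      "192.168.76.2",
	}
	// Render the drop-in to stdout; minikube instead writes it to the node.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```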
	I1208 01:40:55.542618 1021094 ssh_runner.go:195] Run: crio config
	I1208 01:40:55.608304 1021094 cni.go:84] Creating CNI manager for ""
	I1208 01:40:55.608330 1021094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:55.608371 1021094 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:40:55.608409 1021094 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-389831 NodeName:no-preload-389831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:40:55.608604 1021094 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-389831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:40:55.608716 1021094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:40:55.618315 1021094 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1208 01:40:55.618427 1021094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:40:55.626985 1021094 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1208 01:40:55.627359 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1208 01:40:55.627952 1021094 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1208 01:40:55.628367 1021094 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1208 01:40:55.633443 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1208 01:40:55.633488 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1208 01:40:56.620488 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:40:56.639267 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1208 01:40:56.644242 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1208 01:40:56.644297 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1208 01:40:56.732411 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1208 01:40:56.742026 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1208 01:40:56.742074 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
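	(The kubectl/kubelet/kubeadm downloads above use the `checksum=file:<url>.sha256` form, i.e. each release binary is verified against its published SHA-256 file. A minimal sketch of that verification step, reusing the dl.k8s.io URL from the log; this is not minikube's download package.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads a binary and its published .sha256 file and checks
// that the digests match before returning the bytes.
func fetchVerified(binURL, sumURL string) ([]byte, error) {
	bin, err := get(binURL)
	if err != nil {
		return nil, err
	}
	sum, err := get(sumURL)
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sum)) // "<hex digest>" optionally followed by a filename
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file %s", sumURL)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != fields[0] {
		return nil, fmt.Errorf("checksum mismatch: got %x, want %s", got, fields[0])
	}
	return bin, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
	if _, err := fetchVerified(base, base+".sha256"); err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Println("kubectl downloaded and verified")
}
```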
	I1208 01:40:57.367860 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:40:57.386337 1021094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:40:57.406473 1021094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:40:57.421029 1021094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
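	(The kubeadm.yaml.new written above holds the multi-document config printed earlier in the log. A small sketch of decoding such a multi-document stream, assuming gopkg.in/yaml.v3 is available and only the apiVersion/kind fields are of interest.)

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures only the identifying fields of each InitConfiguration,
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration
// document; all other fields are ignored by the decoder.
type doc struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log; adjust locally
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // io.EOF when all documents are consumed, or a parse error
		}
		fmt.Printf("%s / %s\n", d.APIVersion, d.Kind)
	}
}
```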
	I1208 01:40:57.434825 1021094 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:40:57.439223 1021094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:40:57.449248 1021094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:57.640067 1021094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:40:57.685241 1021094 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831 for IP: 192.168.76.2
	I1208 01:40:57.685310 1021094 certs.go:195] generating shared ca certs ...
	I1208 01:40:57.685341 1021094 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.685534 1021094 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:40:57.685603 1021094 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:40:57.685624 1021094 certs.go:257] generating profile certs ...
	I1208 01:40:57.685707 1021094 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key
	I1208 01:40:57.685741 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt with IP's: []
	I1208 01:40:57.798040 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt ...
	I1208 01:40:57.798074 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt: {Name:mk43ee9cb64d4d36ddab24e767a95ef0e5d2d3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.798305 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key ...
	I1208 01:40:57.798320 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key: {Name:mk7d00067baa29a2737ac83ba8ddb47ef30348a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.798425 1021094 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e
	I1208 01:40:57.798444 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:40:57.927141 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e ...
	I1208 01:40:57.927177 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e: {Name:mk4240accc36220fe97de733b0df0bcfda683f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.927358 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e ...
	I1208 01:40:57.927376 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e: {Name:mk4ddd25b3ece0ab46286ccbd524a494c1408f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.927455 1021094 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt
	I1208 01:40:57.927535 1021094 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key
	I1208 01:40:57.927600 1021094 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key
	I1208 01:40:57.927621 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt with IP's: []
	I1208 01:40:58.305332 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt ...
	I1208 01:40:58.305364 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt: {Name:mkf0fe7312d071deb211429779cb97fae64ec03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:58.305547 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key ...
	I1208 01:40:58.305567 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key: {Name:mk723b5789ca1fc903f77c35a669827cb9f89c85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
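	(The profile certificates above are ordinary x509 certificates; the apiserver cert is issued with the IP SANs listed in the log. A self-contained sketch of creating a certificate with those IP SANs using Go's standard library follows; it is self-signed for brevity, whereas minikube signs with its minikubeCA key.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the apiserver cert generated in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed for the sketch; minikube signs with its CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```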
	I1208 01:40:58.305758 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:40:58.305812 1021094 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:40:58.305827 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:40:58.305855 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:40:58.305886 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:40:58.305917 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:40:58.305971 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:58.306535 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:40:58.337508 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:40:58.365804 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:40:58.401533 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:40:58.435305 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:40:58.468794 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:40:58.505545 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:40:58.558346 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:40:58.600280 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:40:58.641629 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:40:58.680780 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:40:58.705473 1021094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:40:58.737548 1021094 ssh_runner.go:195] Run: openssl version
	I1208 01:40:58.751909 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.764191 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:40:58.778005 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.786094 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.786162 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.859688 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:40:58.870971 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:40:58.880979 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.896850 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:40:58.904951 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.909107 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.909180 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.956042 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:58.963909 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:58.971511 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.979389 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:40:58.987098 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.991291 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.991411 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:59.034640 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:40:59.042908 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:40:59.050588 1021094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:40:59.054740 1021094 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:40:59.054795 1021094 kubeadm.go:401] StartCluster: {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:59.054883 1021094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:40:59.054946 1021094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:40:59.113395 1021094 cri.go:89] found id: ""
	I1208 01:40:59.113462 1021094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:40:59.122084 1021094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:40:59.130170 1021094 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:40:59.130233 1021094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:40:59.145103 1021094 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:40:59.145130 1021094 kubeadm.go:158] found existing configuration files:
	
	I1208 01:40:59.145190 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:40:59.153776 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:40:59.153841 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:40:59.161501 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:40:59.174279 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:40:59.174390 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:40:59.189242 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:40:59.199847 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:40:59.199960 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:40:59.214078 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:40:59.227057 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:40:59.227167 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:40:59.236769 1021094 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:40:59.332768 1021094 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:40:59.333181 1021094 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:40:59.479226 1021094 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:40:59.479352 1021094 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:40:59.479415 1021094 kubeadm.go:319] OS: Linux
	I1208 01:40:59.479478 1021094 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:40:59.479553 1021094 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:40:59.479616 1021094 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:40:59.479686 1021094 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:40:59.479769 1021094 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:40:59.479854 1021094 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:40:59.479966 1021094 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:40:59.480053 1021094 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:40:59.480144 1021094 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:40:59.575913 1021094 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:40:59.576029 1021094 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:40:59.576124 1021094 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:40:59.619232 1021094 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:40:59.626183 1021094 out.go:252]   - Generating certificates and keys ...
	I1208 01:40:59.626288 1021094 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:40:59.626359 1021094 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:40:59.794659 1021094 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:40:59.983167 1021094 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:41:00.099229 1021094 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:41:00.511838 1021094 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:41:01.492176 1021094 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:41:01.492685 1021094 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:41:01.626325 1021094 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:41:01.626939 1021094 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:41:01.840581 1021094 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:41:02.091995 1021094 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:41:02.170533 1021094 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:41:02.171147 1021094 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:41:02.777854 1021094 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:41:03.544958 1021094 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:41:03.918083 1021094 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:41:04.119224 1021094 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:41:04.441753 1021094 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:41:04.442881 1021094 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:41:04.445832 1021094 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:41:04.449779 1021094 out.go:252]   - Booting up control plane ...
	I1208 01:41:04.449901 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:41:04.449985 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:41:04.454509 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:41:04.478904 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:41:04.479017 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:41:04.490174 1021094 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:41:04.490806 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:41:04.491291 1021094 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:41:04.692853 1021094 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:41:04.692979 1021094 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:04.693041 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000483408s
	I1208 01:45:04.693071 1021094 kubeadm.go:319] 
	I1208 01:45:04.693129 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:04.693168 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:04.693276 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:04.693285 1021094 kubeadm.go:319] 
	I1208 01:45:04.693390 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:04.693426 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:04.693459 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:04.693468 1021094 kubeadm.go:319] 
	I1208 01:45:04.699220 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:04.699716 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:04.699840 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:04.700103 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:04.700113 1021094 kubeadm.go:319] 
	I1208 01:45:04.700188 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
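	(The failing kubelet-check above is an HTTP GET against the kubelet's healthz endpoint with a deadline, equivalent to the curl shown in the error. A minimal sketch of the same probe, assuming it is run on the node where the kubelet should be listening on 127.0.0.1:10248.)

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the kubelet-check polls, with a short deadline instead of
	// kubeadm's 4m0s wait.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// e.g. connection refused, or context deadline exceeded as in the log
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}
```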
	W1208 01:45:04.700316 1021094 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000483408s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1208 01:45:04.700422 1021094 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 01:45:05.126409 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:45:05.144980 1021094 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:45:05.145055 1021094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:45:05.154025 1021094 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:45:05.154045 1021094 kubeadm.go:158] found existing configuration files:
	
	I1208 01:45:05.154097 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:45:05.163100 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:45:05.163164 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:45:05.171551 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:45:05.180730 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:45:05.180796 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:45:05.189259 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:45:05.197476 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:45:05.197543 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:45:05.205068 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:45:05.213320 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:45:05.213390 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:45:05.221088 1021094 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:45:05.260882 1021094 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:45:05.260981 1021094 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:45:05.339651 1021094 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:45:05.339727 1021094 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:45:05.339765 1021094 kubeadm.go:319] OS: Linux
	I1208 01:45:05.339812 1021094 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:45:05.339860 1021094 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:45:05.339909 1021094 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:45:05.339957 1021094 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:45:05.340007 1021094 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:45:05.340061 1021094 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:45:05.340107 1021094 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:45:05.340155 1021094 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:45:05.340203 1021094 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:45:05.412153 1021094 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:45:05.412267 1021094 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:45:05.412363 1021094 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:45:05.427370 1021094 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:45:05.435207 1021094 out.go:252]   - Generating certificates and keys ...
	I1208 01:45:05.435371 1021094 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:45:05.435456 1021094 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:45:05.435556 1021094 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:45:05.435636 1021094 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:45:05.435730 1021094 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:45:05.435803 1021094 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:45:05.435901 1021094 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:45:05.435984 1021094 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:45:05.436081 1021094 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:45:05.436186 1021094 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:45:05.436244 1021094 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:45:05.436330 1021094 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:45:06.019386 1021094 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:45:06.173129 1021094 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:45:06.545821 1021094 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:45:06.782921 1021094 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:45:06.973417 1021094 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:45:06.973888 1021094 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:45:06.977327 1021094 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:45:06.982829 1021094 out.go:252]   - Booting up control plane ...
	I1208 01:45:06.982998 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:45:06.983114 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:45:06.983210 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:45:07.001102 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:45:07.001212 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:45:07.010682 1021094 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:45:07.010979 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:45:07.011157 1021094 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:45:07.154935 1021094 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:45:07.155051 1021094 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:49:07.156332 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001184239s
	I1208 01:49:07.156375 1021094 kubeadm.go:319] 
	I1208 01:49:07.156475 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:49:07.156683 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:49:07.156865 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:49:07.156875 1021094 kubeadm.go:319] 
	I1208 01:49:07.157056 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:49:07.157354 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:49:07.157410 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:49:07.157416 1021094 kubeadm.go:319] 
	I1208 01:49:07.162909 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:49:07.163434 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:49:07.163569 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:49:07.163832 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:49:07.163845 1021094 kubeadm.go:319] 
	I1208 01:49:07.163964 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:49:07.163990 1021094 kubeadm.go:403] duration metric: took 8m8.109200094s to StartCluster
	I1208 01:49:07.164030 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:49:07.164092 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:49:07.189444 1021094 cri.go:89] found id: ""
	I1208 01:49:07.189467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.189475 1021094 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:49:07.189482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:49:07.189545 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:49:07.214553 1021094 cri.go:89] found id: ""
	I1208 01:49:07.214578 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.214586 1021094 logs.go:284] No container was found matching "etcd"
	I1208 01:49:07.214592 1021094 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:49:07.214652 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:49:07.240730 1021094 cri.go:89] found id: ""
	I1208 01:49:07.240765 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.240774 1021094 logs.go:284] No container was found matching "coredns"
	I1208 01:49:07.240780 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:49:07.240877 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:49:07.275951 1021094 cri.go:89] found id: ""
	I1208 01:49:07.275976 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.275984 1021094 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:49:07.275991 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:49:07.276048 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:49:07.308446 1021094 cri.go:89] found id: ""
	I1208 01:49:07.308467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.308476 1021094 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:49:07.308482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:49:07.308544 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:49:07.337708 1021094 cri.go:89] found id: ""
	I1208 01:49:07.337730 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.337738 1021094 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:49:07.337744 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:49:07.337804 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:49:07.365399 1021094 cri.go:89] found id: ""
	I1208 01:49:07.365420 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.365428 1021094 logs.go:284] No container was found matching "kindnet"
	I1208 01:49:07.365438 1021094 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:49:07.365449 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:49:07.429624 1021094 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:49:07.429646 1021094 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:49:07.429657 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:49:07.471772 1021094 logs.go:123] Gathering logs for container status ...
	I1208 01:49:07.471809 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:49:07.507231 1021094 logs.go:123] Gathering logs for kubelet ...
	I1208 01:49:07.507258 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:49:07.572140 1021094 logs.go:123] Gathering logs for dmesg ...
	I1208 01:49:07.572179 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:49:07.589992 1021094 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:49:07.590043 1021094 out.go:285] * 
	* 
	W1208 01:49:07.590093 1021094 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.590111 1021094 out.go:285] * 
	* 
	W1208 01:49:07.592441 1021094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:49:07.598676 1021094 out.go:203] 
	W1208 01:49:07.601501 1021094 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.601539 1021094 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:49:07.601583 1021094 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:49:07.604654 1021094 out.go:203] 

** /stderr **
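
The failure above reduces to the kubelet never answering on http://127.0.0.1:10248/healthz, and the log's own advice is to look at the kubelet unit. A minimal sketch of that check against this run's node container (the container name no-preload-389831 comes from the docker inspect output below; docker exec is one assumed way in, `minikube ssh -p no-preload-389831` is another):

	# inspect the kubelet unit and its recent journal inside the node container
	docker exec no-preload-389831 systemctl status kubelet --no-pager
	docker exec no-preload-389831 journalctl -xeu kubelet --no-pager | tail -n 100
	# probe the same health endpoint kubeadm was polling
	docker exec no-preload-389831 curl -sS http://127.0.0.1:10248/healthz
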
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
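
The log ends with a concrete suggestion: pass --extra-config=kubelet.cgroup-driver=systemd. A sketch of the retry, reusing the arguments from the failing invocation above plus that flag (whether the flag still applies to kubelet v1.35, and whether it clears the cgroup v1 warning from the preflight output, is not verified here):

	out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true \
	  --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
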
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:40:32.261581076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79193c30e8ff7cdcf99f747e987c12c0c02ab2d4b1e09c1f844845ffd7e244c8",
	            "SandboxKey": "/var/run/docker/netns/79193c30e8ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a7:b4:4f:0b:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "ac3963043985cb3c4beb5ad7f93727fc9a3cc524dd93131be5af0216706250c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
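
Most of the inspect dump above is boilerplate; the post-mortem really only needs the container state and the address minikube assigned. A sketch of pulling just those fields with docker's Go-template formatting:

	docker inspect -f '{{.State.Status}}' no-preload-389831
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-389831
	# expected from the dump above: "running" and 192.168.76.2
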
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 6 (365.510941ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:49:08.058137 1044082 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
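
The exit status 6 traces back to the missing kubeconfig entry shown in the stderr above, and the status output itself points at `minikube update-context`. A sketch of that follow-up with this run's profile name (with the apiserver never having come up, this only refreshes the kubeconfig entry, it does not repair the cluster):

	out/minikube-linux-arm64 update-context -p no-preload-389831
	out/minikube-linux-arm64 status -p no-preload-389831 --format={{.Host}}
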
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                            │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:46:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:46:29.329866 1039943 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:29.330081 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330108 1039943 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:29.330126 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330385 1039943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:29.330823 1039943 out.go:368] Setting JSON to false
	I1208 01:46:29.331797 1039943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23322,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:46:29.331896 1039943 start.go:143] virtualization:  
	I1208 01:46:29.336178 1039943 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:46:29.339647 1039943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:46:29.339692 1039943 notify.go:221] Checking for updates...
	I1208 01:46:29.343070 1039943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:46:29.346748 1039943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:46:29.349908 1039943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:46:29.353489 1039943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:46:29.356725 1039943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:46:29.360434 1039943 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:29.360559 1039943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:46:29.382085 1039943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:46:29.382198 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.440774 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.431745879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.440872 1039943 docker.go:319] overlay module found
	I1208 01:46:29.444115 1039943 out.go:179] * Using the docker driver based on user configuration
	I1208 01:46:29.447050 1039943 start.go:309] selected driver: docker
	I1208 01:46:29.447088 1039943 start.go:927] validating driver "docker" against <nil>
	I1208 01:46:29.447103 1039943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:46:29.447822 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.513492 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.504737954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.513651 1039943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:46:29.513674 1039943 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:46:29.513890 1039943 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:46:29.517063 1039943 out.go:179] * Using Docker driver with root privileges
	I1208 01:46:29.519963 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:29.520039 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:29.520052 1039943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:46:29.520136 1039943 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:29.523357 1039943 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:46:29.526151 1039943 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:46:29.529015 1039943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:46:29.531940 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:29.532005 1039943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:46:29.532021 1039943 cache.go:65] Caching tarball of preloaded images
	I1208 01:46:29.532026 1039943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:46:29.532106 1039943 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:46:29.532117 1039943 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:46:29.532224 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:29.532242 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json: {Name:mk18f08541f75fcff1b0d7777fe02845efecf137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
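The two lines above save the generated cluster config to the profile's config.json under a write lock. A minimal sketch of that persistence step in Go, using only a handful of the fields from the cluster config dump above (minikube's real struct carries many more; the field names here are illustrative):

    // profilecfg.go: persist a minimal profile config the way the
    // "Saving config to .../profiles/newest-cni-448023/config.json" step does.
    // Only a few fields from the cluster config dump are modeled here.
    package main

    import (
        "encoding/json"
        "os"
    )

    type ClusterConfig struct {
        Name              string
        Driver            string
        ContainerRuntime  string
        KubernetesVersion string
        Memory            int
        CPUs              int
    }

    func main() {
        cfg := ClusterConfig{
            Name:              "newest-cni-448023",
            Driver:            "docker",
            ContainerRuntime:  "crio",
            KubernetesVersion: "v1.35.0-beta.0",
            Memory:            3072,
            CPUs:              2,
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("config.json", append(data, '\n'), 0o644); err != nil {
            panic(err)
        }
    }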
	I1208 01:46:29.551296 1039943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:46:29.551320 1039943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:46:29.551340 1039943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:46:29.551371 1039943 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:46:29.551480 1039943 start.go:364] duration metric: took 87.493µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:46:29.551523 1039943 start.go:93] Provisioning new machine with config: &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:46:29.551657 1039943 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:46:29.555023 1039943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:46:29.555251 1039943 start.go:159] libmachine.API.Create for "newest-cni-448023" (driver="docker")
	I1208 01:46:29.555289 1039943 client.go:173] LocalClient.Create starting
	I1208 01:46:29.555374 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:46:29.555413 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555432 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555492 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:46:29.555518 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555535 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555895 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:46:29.572337 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:46:29.572449 1039943 network_create.go:284] running [docker network inspect newest-cni-448023] to gather additional debugging logs...
	I1208 01:46:29.572473 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023
	W1208 01:46:29.587652 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 returned with exit code 1
	I1208 01:46:29.587681 1039943 network_create.go:287] error running [docker network inspect newest-cni-448023]: docker network inspect newest-cni-448023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-448023 not found
	I1208 01:46:29.587697 1039943 network_create.go:289] output of [docker network inspect newest-cni-448023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-448023 not found
	
	** /stderr **
	I1208 01:46:29.587791 1039943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:29.603250 1039943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:46:29.603598 1039943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:46:29.603957 1039943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:46:29.604235 1039943 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:46:29.604628 1039943 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6ec0}
	I1208 01:46:29.604652 1039943 network_create.go:124] attempt to create docker network newest-cni-448023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:46:29.604709 1039943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023
	I1208 01:46:29.659267 1039943 network_create.go:108] docker network newest-cni-448023 192.168.85.0/24 created
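The network_create lines above show the subnet picker skipping every /24 already claimed by an existing bridge (192.168.49/58/67/76) and settling on 192.168.85.0/24. A minimal Go sketch of that scan, assuming the set of taken subnets is supplied by the caller rather than read from `docker network inspect` as minikube does:

    // freecidr.go: pick the first free /24 from a fixed candidate ladder,
    // mirroring the "skipping subnet ... that is taken" lines above.
    // The candidate list and the taken set are assumptions for illustration,
    // not minikube's actual selection logic.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) (string, bool) {
        // Candidate /24s, 9 apart, as seen in the log (49, 58, 67, 76, 85, ...).
        for third := 49; third <= 103; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            return cidr, true
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{ // gathered from existing bridge networks
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        if cidr, ok := firstFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", cidr) // 192.168.85.0/24
        }
    }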
	I1208 01:46:29.659307 1039943 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-448023" container
	I1208 01:46:29.659395 1039943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:46:29.675118 1039943 cli_runner.go:164] Run: docker volume create newest-cni-448023 --label name.minikube.sigs.k8s.io=newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:46:29.693502 1039943 oci.go:103] Successfully created a docker volume newest-cni-448023
	I1208 01:46:29.693603 1039943 cli_runner.go:164] Run: docker run --rm --name newest-cni-448023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --entrypoint /usr/bin/test -v newest-cni-448023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:46:30.260940 1039943 oci.go:107] Successfully prepared a docker volume newest-cni-448023
	I1208 01:46:30.261013 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:30.261031 1039943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:46:30.261099 1039943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:46:34.244465 1039943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983325366s)
	I1208 01:46:34.244500 1039943 kic.go:203] duration metric: took 3.983465364s to extract preloaded images to volume ...
	W1208 01:46:34.244633 1039943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:46:34.244781 1039943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:46:34.337950 1039943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-448023 --name newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-448023 --network newest-cni-448023 --ip 192.168.85.2 --volume newest-cni-448023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:46:34.625342 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Running}}
	I1208 01:46:34.649912 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.674400 1039943 cli_runner.go:164] Run: docker exec newest-cni-448023 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:46:34.723723 1039943 oci.go:144] the created container "newest-cni-448023" has a running status.
	I1208 01:46:34.723752 1039943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa...
	I1208 01:46:34.892140 1039943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:46:34.912965 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.938479 1039943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:46:34.938507 1039943 kic_runner.go:114] Args: [docker exec --privileged newest-cni-448023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:46:35.028018 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:35.058920 1039943 machine.go:94] provisionDockerMachine start ...
	I1208 01:46:35.059025 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:35.099088 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:35.099448 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:35.099466 1039943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:46:35.100020 1039943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47050->127.0.0.1:33807: read: connection reset by peer
	I1208 01:46:38.254334 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.254358 1039943 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:46:38.254421 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.272041 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.272365 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.272382 1039943 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:46:38.436500 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.436590 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.453974 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.454288 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.454304 1039943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:46:38.607227 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
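Provisioning above runs each shell command over the container's published SSH port (127.0.0.1:33807) as the docker user, authenticating with the generated machine key. A minimal sketch of one such round-trip with golang.org/x/crypto/ssh; the port, user, and key path are copied from the log, and host-key checking is skipped because the target is a throwaway test container. This is only a sketch, not minikube's ssh_runner code:

    // sshrun.go: run one command over SSH the way the provisioner above does
    // (key-based auth against the container's published 22/tcp port).
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port taken from the log; adjust for your environment.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test container; no known_hosts
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33807", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("SSH cmd output: %s", out) // expect "newest-cni-448023"
    }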
	I1208 01:46:38.607264 1039943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:46:38.607291 1039943 ubuntu.go:190] setting up certificates
	I1208 01:46:38.607301 1039943 provision.go:84] configureAuth start
	I1208 01:46:38.607362 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:38.623687 1039943 provision.go:143] copyHostCerts
	I1208 01:46:38.623751 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:46:38.623766 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:46:38.623843 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:46:38.623946 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:46:38.623958 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:46:38.623995 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:46:38.624062 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:46:38.624071 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:46:38.624096 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:46:38.624155 1039943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
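The line above generates the machine's server certificate with SANs covering the loopback address, the node IP, and the listed host names. A short crypto/x509 sketch that produces a certificate with the same SAN list; it is self-signed for brevity, whereas minikube signs with ca.pem/ca-key.pem:

    // sancert.go: generate a key and a certificate carrying the SANs listed in
    // the log line above. Self-signed to keep the sketch short; treat as an
    // illustration only.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-448023"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-448023"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }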
	I1208 01:46:38.807873 1039943 provision.go:177] copyRemoteCerts
	I1208 01:46:38.807949 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:46:38.808001 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.828753 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:38.934898 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:46:38.952864 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:46:38.970012 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:46:38.987418 1039943 provision.go:87] duration metric: took 380.093979ms to configureAuth
	I1208 01:46:38.987489 1039943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:46:38.987701 1039943 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:38.987812 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.021586 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:39.021916 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:39.021944 1039943 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:46:39.335041 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:46:39.335061 1039943 machine.go:97] duration metric: took 4.276119883s to provisionDockerMachine
	I1208 01:46:39.335070 1039943 client.go:176] duration metric: took 9.779771841s to LocalClient.Create
	I1208 01:46:39.335086 1039943 start.go:167] duration metric: took 9.779836023s to libmachine.API.Create "newest-cni-448023"
	I1208 01:46:39.335093 1039943 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:46:39.335105 1039943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:46:39.335174 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:46:39.335220 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.352266 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.458536 1039943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:46:39.461608 1039943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:46:39.461639 1039943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:46:39.461650 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:46:39.461705 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:46:39.461789 1039943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:46:39.461894 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:46:39.469247 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:39.486243 1039943 start.go:296] duration metric: took 151.134201ms for postStartSetup
	I1208 01:46:39.486633 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.504855 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:39.505123 1039943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:46:39.505164 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.523441 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.627950 1039943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:46:39.632598 1039943 start.go:128] duration metric: took 10.080925153s to createHost
	I1208 01:46:39.632621 1039943 start.go:83] releasing machines lock for "newest-cni-448023", held for 10.081126738s
	I1208 01:46:39.632691 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.652131 1039943 ssh_runner.go:195] Run: cat /version.json
	I1208 01:46:39.652157 1039943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:46:39.652183 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.652218 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.681809 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.682602 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.869694 1039943 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:39.876126 1039943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:46:39.913719 1039943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:46:39.918384 1039943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:46:39.918458 1039943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:46:39.947242 1039943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:46:39.947265 1039943 start.go:496] detecting cgroup driver to use...
	I1208 01:46:39.947298 1039943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:46:39.947349 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:46:39.965768 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:46:39.978168 1039943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:46:39.978234 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:46:39.995812 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:46:40.019051 1039943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:46:40.157466 1039943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:46:40.288788 1039943 docker.go:234] disabling docker service ...
	I1208 01:46:40.288897 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:46:40.314027 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:46:40.329209 1039943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:46:40.468296 1039943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:46:40.591028 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:46:40.604723 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:46:40.618613 1039943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:46:40.618699 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.627724 1039943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:46:40.627809 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.637292 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.646718 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.656124 1039943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:46:40.664289 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.672999 1039943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.686929 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.695637 1039943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:46:40.703116 1039943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:46:40.710332 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:40.834286 1039943 ssh_runner.go:195] Run: sudo systemctl restart crio
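The sed one-liners above point cri-o at the registry.k8s.io/pause:3.10.1 pause image and switch it to the cgroupfs cgroup manager before restarting the service. The same line-level rewrite, sketched in Go against an in-memory copy of 02-crio.conf (the real run edits the file over SSH inside the node):

    // crioconf.go: apply the pause_image / cgroup_manager rewrites that the
    // sed commands above perform, as a local, in-memory sketch.
    package main

    import (
        "fmt"
        "regexp"
    )

    func rewriteCrioConf(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "# example 02-crio.conf fragment\n" +
            "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "cgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }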
	I1208 01:46:41.006471 1039943 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:46:41.006581 1039943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:46:41.017809 1039943 start.go:564] Will wait 60s for crictl version
	I1208 01:46:41.017944 1039943 ssh_runner.go:195] Run: which crictl
	I1208 01:46:41.022606 1039943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:46:41.056937 1039943 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:46:41.057065 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.093495 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.124549 1039943 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:46:41.127395 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:41.143475 1039943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:46:41.147287 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
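The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP. A Go sketch of the same upsert, operating on an in-memory copy of the file rather than the node's real /etc/hosts:

    // hosts.go: drop any existing "host.minikube.internal" mapping and append
    // the gateway entry, mirroring the grep/echo/cp one-liner above.
    package main

    import (
        "fmt"
        "strings"
    )

    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // replaced below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "192.168.85.1", "host.minikube.internal"))
    }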
	I1208 01:46:41.159892 1039943 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:46:41.162523 1039943 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:46:41.162667 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:41.162750 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.195193 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.195217 1039943 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:46:41.195275 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.220173 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.220196 1039943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:46:41.220203 1039943 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:46:41.220293 1039943 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:46:41.220379 1039943 ssh_runner.go:195] Run: crio config
	I1208 01:46:41.279892 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:41.279918 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:41.279934 1039943 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:46:41.279985 1039943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:46:41.280144 1039943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
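	
	The block above is the full kubeadm/kubelet/kube-proxy configuration that minikube renders before writing it to /var/tmp/minikube/kubeadm.yaml.new (see the scp and cp steps below). As a hedged illustration only, the same file could be inspected and dry-run inside the node container, reusing the paths and binary location already shown in this log; the docker exec target assumes the docker driver names the container after the profile:
	
	    docker exec -it newest-cni-448023 /bin/bash                      # node container for this profile (docker driver)
	    sudo cat /var/tmp/minikube/kubeadm.yaml                          # the rendered config shown above, once copied into place
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml --dry-run            # exercises the init phases without changing the node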
	
	I1208 01:46:41.280222 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:46:41.287843 1039943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:46:41.287924 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:46:41.295456 1039943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:46:41.308022 1039943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:46:41.324403 1039943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1208 01:46:41.337573 1039943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:46:41.341125 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.350760 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:41.469701 1039943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:46:41.486526 1039943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:46:41.486549 1039943 certs.go:195] generating shared ca certs ...
	I1208 01:46:41.486570 1039943 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.486758 1039943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:46:41.486827 1039943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:46:41.486867 1039943 certs.go:257] generating profile certs ...
	I1208 01:46:41.486942 1039943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:46:41.486953 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt with IP's: []
	I1208 01:46:41.756525 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt ...
	I1208 01:46:41.756551 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt: {Name:mk0603ae5124c088a63c1752061db6508bab22f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756725 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key ...
	I1208 01:46:41.756733 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key: {Name:mkca461b7eac0897c193e0836f61829f4e9d4b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756813 1039943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:46:41.756826 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:46:41.854144 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e ...
	I1208 01:46:41.854175 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e: {Name:mk808166fcccc166bf8bbe144226f9daaa100961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854378 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e ...
	I1208 01:46:41.854395 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e: {Name:mkad238fa32487b653b0a9f151377065f0951a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854489 1039943 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt
	I1208 01:46:41.854571 1039943 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key
	I1208 01:46:41.854631 1039943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:46:41.854650 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt with IP's: []
	I1208 01:46:42.097939 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt ...
	I1208 01:46:42.097979 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt: {Name:mk99d1d19a981d57bf4d12a2cb81e3e53a22a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098217 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key ...
	I1208 01:46:42.098235 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key: {Name:mk0c7b8d27fa7ac473db57ad4f3abf32e11a6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098441 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:46:42.098497 1039943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:46:42.098508 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:46:42.098536 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:46:42.098564 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:46:42.098594 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:46:42.098649 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:42.099505 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:46:42.123800 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:46:42.149931 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:46:42.172486 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:46:42.204182 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:46:42.225772 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:46:42.248373 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:46:42.277328 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:46:42.301927 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:46:42.325492 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:46:42.345377 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:46:42.363969 1039943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:46:42.376790 1039943 ssh_runner.go:195] Run: openssl version
	I1208 01:46:42.383055 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.390479 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:46:42.397965 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401796 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401919 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.443135 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:46:42.450626 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:46:42.458240 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.465745 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:46:42.473315 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477290 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477357 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.518810 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:46:42.527316 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:46:42.538286 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.547106 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:46:42.555430 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560073 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560165 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.601377 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.609019 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
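	
	The openssl/ln sequence above is how minikube installs each CA into the node's system trust store: the certificate is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL-based clients can find it. A minimal sketch of the same step, using the minikubeCA path from this log (the variable names are illustrative):
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints b5213941 for this CA, matching the symlink created above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves trust anchors by <subject-hash>.0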
	I1208 01:46:42.616650 1039943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:46:42.620441 1039943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:46:42.620500 1039943 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:42.620585 1039943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:46:42.620649 1039943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:46:42.649932 1039943 cri.go:89] found id: ""
	I1208 01:46:42.650013 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:46:42.657890 1039943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:46:42.665577 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:46:42.665663 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:46:42.673380 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:46:42.673399 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:46:42.673455 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:46:42.681009 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:46:42.681082 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:46:42.688582 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:46:42.696709 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:46:42.696788 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:46:42.704191 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.711702 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:46:42.711814 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.719024 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:46:42.726923 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:46:42.727007 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:46:42.734562 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:46:42.771766 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:46:42.772014 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:46:42.846706 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:46:42.846791 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:46:42.846859 1039943 kubeadm.go:319] OS: Linux
	I1208 01:46:42.846914 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:46:42.846982 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:46:42.847042 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:46:42.847102 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:46:42.847163 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:46:42.847225 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:46:42.847283 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:46:42.847345 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:46:42.847396 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:46:42.914142 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:46:42.914273 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:46:42.914365 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:46:42.927340 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:46:42.933605 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:46:42.933772 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:46:42.933880 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:46:43.136966 1039943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:46:43.328738 1039943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:46:43.732500 1039943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:46:43.956866 1039943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:46:44.129125 1039943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:46:44.129375 1039943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.337195 1039943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:46:44.337494 1039943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.588532 1039943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:46:44.954533 1039943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:46:45.238719 1039943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:46:45.239782 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:46:45.718662 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:46:45.762985 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:46:46.020127 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:46:46.317772 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:46:46.545386 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:46:46.546080 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:46:46.549393 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:46:46.552921 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:46:46.553058 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:46:46.553140 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:46:46.553786 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:46:46.570986 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:46:46.571335 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:46:46.579342 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:46:46.579896 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:46:46.580195 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:46:46.716587 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:46:46.716716 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:49:07.156332 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001184239s
	I1208 01:49:07.156375 1021094 kubeadm.go:319] 
	I1208 01:49:07.156475 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:49:07.156683 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:49:07.156865 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:49:07.156875 1021094 kubeadm.go:319] 
	I1208 01:49:07.157056 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:49:07.157354 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:49:07.157410 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:49:07.157416 1021094 kubeadm.go:319] 
	I1208 01:49:07.162909 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:49:07.163434 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:49:07.163569 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:49:07.163832 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:49:07.163845 1021094 kubeadm.go:319] 
	I1208 01:49:07.163964 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
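	
	The error above is kubeadm giving up after four minutes of polling the kubelet's local health endpoint. A hedged way to rerun the same checks by hand on the node, using only commands already named in the kubeadm output (a healthy kubelet answers the healthz probe with "ok"):
	
	    curl -sSL http://127.0.0.1:10248/healthz      # the endpoint kubeadm polls during wait-control-plane
	    systemctl status kubelet                      # whether the unit is active at all
	    journalctl -xeu kubelet | tail -n 100         # recent kubelet errors; cgroup-driver mismatches usually surface here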
	I1208 01:49:07.163990 1021094 kubeadm.go:403] duration metric: took 8m8.109200094s to StartCluster
	I1208 01:49:07.164030 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:49:07.164092 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:49:07.189444 1021094 cri.go:89] found id: ""
	I1208 01:49:07.189467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.189475 1021094 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:49:07.189482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:49:07.189545 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:49:07.214553 1021094 cri.go:89] found id: ""
	I1208 01:49:07.214578 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.214586 1021094 logs.go:284] No container was found matching "etcd"
	I1208 01:49:07.214592 1021094 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:49:07.214652 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:49:07.240730 1021094 cri.go:89] found id: ""
	I1208 01:49:07.240765 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.240774 1021094 logs.go:284] No container was found matching "coredns"
	I1208 01:49:07.240780 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:49:07.240877 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:49:07.275951 1021094 cri.go:89] found id: ""
	I1208 01:49:07.275976 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.275984 1021094 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:49:07.275991 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:49:07.276048 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:49:07.308446 1021094 cri.go:89] found id: ""
	I1208 01:49:07.308467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.308476 1021094 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:49:07.308482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:49:07.308544 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:49:07.337708 1021094 cri.go:89] found id: ""
	I1208 01:49:07.337730 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.337738 1021094 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:49:07.337744 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:49:07.337804 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:49:07.365399 1021094 cri.go:89] found id: ""
	I1208 01:49:07.365420 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.365428 1021094 logs.go:284] No container was found matching "kindnet"
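	
	Because the API server never came up, minikube falls back to asking CRI-O directly which control-plane containers exist; every query above returns an empty list. A condensed form of those checks, assuming crictl is installed on the node as in this log (the loop is illustrative, not minikube's own code):
	
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # any kube-system containers at all?
	    for c in kube-apiserver etcd kube-scheduler kube-controller-manager kube-proxy coredns kindnet; do
	      echo "== $c =="
	      sudo crictl ps -a --name "$c"                                             # per-component lookup, mirroring the log
	    done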
	I1208 01:49:07.365438 1021094 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:49:07.365449 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:49:07.429624 1021094 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:49:07.429646 1021094 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:49:07.429657 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:49:07.471772 1021094 logs.go:123] Gathering logs for container status ...
	I1208 01:49:07.471809 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:49:07.507231 1021094 logs.go:123] Gathering logs for kubelet ...
	I1208 01:49:07.507258 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:49:07.572140 1021094 logs.go:123] Gathering logs for dmesg ...
	I1208 01:49:07.572179 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:49:07.589992 1021094 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:49:07.590043 1021094 out.go:285] * 
	W1208 01:49:07.590093 1021094 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.590111 1021094 out.go:285] * 
	W1208 01:49:07.592441 1021094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:49:07.598676 1021094 out.go:203] 
	W1208 01:49:07.601501 1021094 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.601539 1021094 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:49:07.601583 1021094 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:49:07.604654 1021094 out.go:203] 
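	
	Minikube's closing suggestion points at a kubelet/cgroup mismatch: the generated KubeletConfiguration above sets cgroupDriver: cgroupfs, the preflight warning notes the host is still on cgroups v1 (and names the kubelet option 'FailCgroupV1'), and the proposed fix is to start over with the kubelet forced onto the systemd driver. A hedged example of acting on that suggestion (the profile name is a placeholder; nothing in this report confirms the flag resolves this particular failure):
	
	    minikube delete -p <profile>
	    minikube start -p <profile> --driver=docker --container-runtime=crio \
	        --extra-config=kubelet.cgroup-driver=systemd   # flag taken verbatim from the suggestion above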
	
	
	==> CRI-O <==
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368154779Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368198923Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.013925665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014099747Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014170156Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.265576665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.26604081Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.266101947Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338552201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338884118Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338939799Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.58396125Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fe4987d-fa68-4798-80d2-b6f670609a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.599048175Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=af362a5f-b1e8-40fc-9b9b-22ea72b61af9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.601243245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d8a1b229-d4f4-4c3b-92fb-098f8f0fb136 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.60654358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0bb15a41-3aee-43e0-bbf9-fda78b30c461 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.607953861Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=981c34d5-0cb0-4db8-9c75-23c9d8d2cd19 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.611594321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a9dea912-c284-4838-a031-472efe431421 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.615047193Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fa013a89-c419-4775-97ab-ba118f73c5bc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.415842018Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5cc098c-7f40-49e5-bba2-01599a22769f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.418814555Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b7a480df-c2a0-408a-8f62-dd9431b94efc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.420546135Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=58c6887c-b0c7-4eff-b873-b4f5e7c16d5e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.42189714Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6b9ba419-3d5e-487a-8468-75890c99582f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.422761051Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8de4c7bf-c80e-41eb-9a33-14c1fff856ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.424360027Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7157efaa-0bc0-4348-a5c6-374c01495c4a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.425327118Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2a907da-3366-4a83-862f-ce206ad44275 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:08.694009    5654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:08.694771    5654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:08.695706    5654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:08.697244    5654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:08.697536    5654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:49:08 up  6:31,  0 user,  load average: 0.60, 1.37, 1.83
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:49:06 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:06 no-preload-389831 kubelet[5469]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:06 no-preload-389831 kubelet[5469]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:06 no-preload-389831 kubelet[5469]: E1208 01:49:06.551254    5469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:06 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:06 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:07 no-preload-389831 kubelet[5502]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:07 no-preload-389831 kubelet[5502]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:07 no-preload-389831 kubelet[5502]: E1208 01:49:07.311430    5502 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:07 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:07 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:07 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:08 no-preload-389831 kubelet[5569]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:08 no-preload-389831 kubelet[5569]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:08 no-preload-389831 kubelet[5569]: E1208 01:49:08.052755    5569 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
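
The kubelet crash-loop captured in the logs above is the proximate cause of the stalled start: the v1.35.0-beta.0 kubelet exits immediately because the node is still running cgroup v1 (the restart counter is already at 322). As a minimal diagnostic sketch, assuming shell access to the node (for example via `minikube ssh -p no-preload-389831`), the cgroup mode can be confirmed as shown below; the kernel-command-line change in the comments is an assumption about how this Ubuntu 20.04 host would be moved to cgroup v2, not something exercised in this run:

    # cgroup2fs => unified hierarchy (cgroup v2); tmpfs => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/

    # On systemd hosts, cgroup v2 is normally enabled by adding
    # systemd.unified_cgroup_hierarchy=1 to the kernel command line and rebooting
    # (assumed remediation for this 5.15.0-1084-aws machine; untested here).
    cat /proc/cmdline
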
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 6 (348.57106ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:49:09.170176 1044311 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (518.89s)
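
Before the failure is recorded, the status check above exits with code 6 and warns that kubectl points at a stale context, and the stderr shows the "no-preload-389831" profile never made it into the test kubeconfig at all. A hedged follow-up sketch, using standard minikube/kubectl commands that can only succeed once the apiserver actually comes up, which it never did in this run:

    # confirm whether the profile was ever written to the test kubeconfig
    grep -c "no-preload-389831" /home/jenkins/minikube-integration/22054-789938/kubeconfig \
      || echo "profile absent, matching the status error above"
    # once the cluster is healthy, the warning's own suggestion applies
    out/minikube-linux-arm64 -p no-preload-389831 update-context
    kubectl config get-contexts
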

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.071748ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:42:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-172173 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-172173 describe deploy/metrics-server -n kube-system: exit status 1 (76.822141ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-172173 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
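
The enable command never reached the addon itself: minikube's "is the cluster paused?" check shells out to `sudo runc list -f json` inside the node and treats a non-zero exit as MK_ADDON_ENABLE_PAUSED, and here that command failed because /run/runc did not exist yet. A rough way to reproduce the check by hand, assuming `minikube ssh -p embed-certs-172173` works and that CRI-O on this image keeps runc state under /run/runc (the crun path below is only a guessed alternative):

    # the exact command minikube's paused check runs; fails while /run/runc is missing
    sudo runc list -f json
    # see which runtime state directories actually exist on the node
    ls -ld /run/runc /run/crun 2>/dev/null
    # cross-check what CRI-O believes is running
    sudo crictl ps --state Running
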
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-172173
helpers_test.go:243: (dbg) docker inspect embed-certs-172173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	        "Created": "2025-12-08T01:40:36.846301629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1022630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:40:36.917016566Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hosts",
	        "LogPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c-json.log",
	        "Name": "/embed-certs-172173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-172173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-172173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	                "LowerDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-172173",
	                "Source": "/var/lib/docker/volumes/embed-certs-172173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-172173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-172173",
	                "name.minikube.sigs.k8s.io": "embed-certs-172173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94d4914eb089f6b240c3351256ee6a007015f61e708fe571983ffe47842da2b1",
	            "SandboxKey": "/var/run/docker/netns/94d4914eb089",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-172173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:90:95:f0:57:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d59524493acc02b4052a1b21ae1c4be3dd0f7ef0214fbeda13b3fc44e2ef94",
	                    "EndpointID": "2dc21c6f189511918f96f8cf543e48b79f5b0893680abe9a91a4b1f5d8c6e15b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-172173",
	                        "5f1be8b9f8b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
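
When only a couple of fields from a dump like the one above matter, for instance the host port published for the apiserver's 8443/tcp or the container state, `docker inspect` accepts a Go template via `--format`/`-f`; a short sketch against the same container:

    # host port mapped to the apiserver port 8443/tcp
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-172173
    # quick state summary
    docker inspect -f '{{ .State.Status }} pid={{ .State.Pid }} started={{ .State.StartedAt }}' embed-certs-172173
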
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25: (1.206549822s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-000739 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ ssh     │ -p cilium-000739 sudo crio config                                                                                                                                                                                                             │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │                     │
	│ delete  │ -p cilium-000739                                                                                                                                                                                                                              │ cilium-000739            │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:36 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831        │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:40:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:40:31.676173 1021451 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:40:31.676338 1021451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:31.676349 1021451 out.go:374] Setting ErrFile to fd 2...
	I1208 01:40:31.676355 1021451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:40:31.676612 1021451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:40:31.677024 1021451 out.go:368] Setting JSON to false
	I1208 01:40:31.677866 1021451 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22964,"bootTime":1765135068,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:40:31.677933 1021451 start.go:143] virtualization:  
	I1208 01:40:31.680198 1021451 out.go:179] * [embed-certs-172173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:40:31.681670 1021451 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:40:31.681773 1021451 notify.go:221] Checking for updates...
	I1208 01:40:31.692835 1021451 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:40:31.694685 1021451 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:40:31.696514 1021451 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:40:31.698763 1021451 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:40:31.702099 1021451 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:40:31.703993 1021451 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:40:31.704096 1021451 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:40:31.763907 1021451 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:40:31.764032 1021451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:32.001205 1021451 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-08 01:40:31.978305958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:32.001331 1021451 docker.go:319] overlay module found
	I1208 01:40:32.003806 1021451 out.go:179] * Using the docker driver based on user configuration
	I1208 01:40:32.005517 1021451 start.go:309] selected driver: docker
	I1208 01:40:32.005540 1021451 start.go:927] validating driver "docker" against <nil>
	I1208 01:40:32.005555 1021451 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:40:32.006327 1021451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:40:32.159463 1021451 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-08 01:40:32.149036184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:40:32.159616 1021451 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:40:32.159833 1021451 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:40:32.161239 1021451 out.go:179] * Using Docker driver with root privileges
	I1208 01:40:32.162466 1021451 cni.go:84] Creating CNI manager for ""
	I1208 01:40:32.162544 1021451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:32.162556 1021451 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:40:32.162656 1021451 start.go:353] cluster config:
	{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:32.164143 1021451 out.go:179] * Starting "embed-certs-172173" primary control-plane node in "embed-certs-172173" cluster
	I1208 01:40:32.165236 1021451 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:40:32.166452 1021451 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:40:32.168734 1021451 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:32.168781 1021451 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:40:32.168815 1021451 cache.go:65] Caching tarball of preloaded images
	I1208 01:40:32.168816 1021451 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:40:32.168945 1021451 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:40:32.168993 1021451 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:40:32.169232 1021451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:40:32.169262 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json: {Name:mk13abd2c26aab00ff45acf82ea9cc3c055750ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:32.207029 1021451 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:40:32.207053 1021451 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:40:32.207068 1021451 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:40:32.207098 1021451 start.go:360] acquireMachinesLock for embed-certs-172173: {Name:mk1784cff2b700f98514e7f93e65851ad3664475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:40:32.207193 1021451 start.go:364] duration metric: took 79.435µs to acquireMachinesLock for "embed-certs-172173"
	I1208 01:40:32.207226 1021451 start.go:93] Provisioning new machine with config: &{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:40:32.207303 1021451 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:40:30.582905 1021094 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:40:30.583144 1021094 start.go:159] libmachine.API.Create for "no-preload-389831" (driver="docker")
	I1208 01:40:30.583175 1021094 client.go:173] LocalClient.Create starting
	I1208 01:40:30.583235 1021094 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:40:30.583268 1021094 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:30.583286 1021094 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:30.583344 1021094 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:40:30.583360 1021094 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:30.583372 1021094 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:30.583731 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:40:30.617511 1021094 cli_runner.go:211] docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:40:30.617597 1021094 network_create.go:284] running [docker network inspect no-preload-389831] to gather additional debugging logs...
	I1208 01:40:30.617616 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831
	W1208 01:40:30.640046 1021094 cli_runner.go:211] docker network inspect no-preload-389831 returned with exit code 1
	I1208 01:40:30.640078 1021094 network_create.go:287] error running [docker network inspect no-preload-389831]: docker network inspect no-preload-389831: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-389831 not found
	I1208 01:40:30.640091 1021094 network_create.go:289] output of [docker network inspect no-preload-389831]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-389831 not found
	
	** /stderr **
	I1208 01:40:30.640179 1021094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:30.659736 1021094 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:40:30.660060 1021094 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:40:30.660405 1021094 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:40:30.660790 1021094 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c3ba80}
	I1208 01:40:30.660815 1021094 network_create.go:124] attempt to create docker network no-preload-389831 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:40:30.660873 1021094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-389831 no-preload-389831
	I1208 01:40:30.785279 1021094 network_create.go:108] docker network no-preload-389831 192.168.76.0/24 created
	I1208 01:40:30.785310 1021094 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-389831" container
	I1208 01:40:30.785392 1021094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:40:30.805170 1021094 cli_runner.go:164] Run: docker volume create no-preload-389831 --label name.minikube.sigs.k8s.io=no-preload-389831 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:40:30.827543 1021094 oci.go:103] Successfully created a docker volume no-preload-389831
	I1208 01:40:30.827643 1021094 cli_runner.go:164] Run: docker run --rm --name no-preload-389831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --entrypoint /usr/bin/test -v no-preload-389831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:40:30.943056 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:30.943305 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:40:30.945628 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:30.953346 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:40:30.969327 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:40:31.004580 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:40:31.011116 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:31.105258 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:40:31.110837 1021094 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 554.555793ms
	I1208 01:40:31.110891 1021094 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:40:31.358672 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:40:31.358790 1021094 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 802.680779ms
	I1208 01:40:31.358876 1021094 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	W1208 01:40:31.847628 1021094 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:40:31.847676 1021094 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:40:31.955136 1021094 cli_runner.go:217] Completed: docker run --rm --name no-preload-389831-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --entrypoint /usr/bin/test -v no-preload-389831:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.127435438s)
	I1208 01:40:31.955160 1021094 oci.go:107] Successfully prepared a docker volume no-preload-389831
	I1208 01:40:31.955190 1021094 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1208 01:40:31.955323 1021094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:40:31.955427 1021094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:40:31.975281 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:40:31.975360 1021094 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.419412983s
	I1208 01:40:31.975388 1021094 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:40:32.056401 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:40:32.056492 1021094 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.499910533s
	I1208 01:40:32.056518 1021094 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:40:32.083362 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:40:32.083443 1021094 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.527905135s
	I1208 01:40:32.083470 1021094 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:40:32.097637 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:40:32.098131 1021094 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.54235797s
	I1208 01:40:32.098272 1021094 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:40:32.138269 1021094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-389831 --name no-preload-389831 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-389831 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-389831 --network no-preload-389831 --ip 192.168.76.2 --volume no-preload-389831:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:40:32.148070 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:40:32.148095 1021094 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 1.591676321s
	I1208 01:40:32.148108 1021094 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:40:32.372606 1021094 cache.go:157] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:40:32.372640 1021094 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.817550811s
	I1208 01:40:32.372654 1021094 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:40:32.372678 1021094 cache.go:87] Successfully saved all images to host disk.
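
Each cache.go triple above (exists / "took" / "save to tar file ... succeeded") follows the same pattern: the tarball path under the arch-specific cache directory is probed, and an already-present tar is recorded as cached along with the elapsed time, while a missing one would be pulled and written first. A minimal sketch of the existence check, assuming the on-disk layout visible in the log (.minikube/cache/images/arm64/...); the download-and-save path is elided:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // ensureCached reports whether the tar for an image already exists under
    // cacheDir, mirroring the exists/"took"/succeeded lines in the log.
    func ensureCached(cacheDir, image string) (bool, time.Duration) {
        start := time.Now()
        // registry.k8s.io/pause:3.10.1 -> registry.k8s.io/pause_3.10.1
        rel := strings.ReplaceAll(image, ":", "_")
        _, err := os.Stat(filepath.Join(cacheDir, filepath.FromSlash(rel)))
        return err == nil, time.Since(start)
    }

    func main() {
        ok, took := ensureCached(
            "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64",
            "registry.k8s.io/pause:3.10.1")
        fmt.Println(ok, took)
    }
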
	I1208 01:40:32.598398 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Running}}
	I1208 01:40:32.615462 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:32.648114 1021094 cli_runner.go:164] Run: docker exec no-preload-389831 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:40:32.755468 1021094 oci.go:144] the created container "no-preload-389831" has a running status.
	I1208 01:40:32.755495 1021094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa...
	I1208 01:40:34.036590 1021094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:40:34.057539 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:34.081456 1021094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:40:34.081478 1021094 kic_runner.go:114] Args: [docker exec --privileged no-preload-389831 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:40:34.137445 1021094 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:40:34.157285 1021094 machine.go:94] provisionDockerMachine start ...
	I1208 01:40:34.157383 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:34.178906 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:34.179255 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:34.179278 1021094 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:40:34.179934 1021094 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
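
The "Error dialing TCP: ssh: handshake failed: EOF" line is expected this early: sshd inside the freshly started kic container is not up yet, so provisioning retries the dial on the published 127.0.0.1 port (33782 here) until it succeeds, which it does at 01:40:37 further down. A minimal retry sketch using golang.org/x/crypto/ssh, with the port, user, and key path copied from the log and the one-second retry interval an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for { // retry until sshd inside the container answers
            client, err = ssh.Dial("tcp", "127.0.0.1:33782", cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.Output("hostname")
        fmt.Printf("%s", out)
    }
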
	I1208 01:40:32.213968 1021451 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:40:32.214664 1021451 start.go:159] libmachine.API.Create for "embed-certs-172173" (driver="docker")
	I1208 01:40:32.214699 1021451 client.go:173] LocalClient.Create starting
	I1208 01:40:32.214771 1021451 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:40:32.214810 1021451 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:32.214829 1021451 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:32.214911 1021451 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:40:32.214930 1021451 main.go:143] libmachine: Decoding PEM data...
	I1208 01:40:32.214941 1021451 main.go:143] libmachine: Parsing certificate...
	I1208 01:40:32.215310 1021451 cli_runner.go:164] Run: docker network inspect embed-certs-172173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:40:32.235061 1021451 cli_runner.go:211] docker network inspect embed-certs-172173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:40:32.235142 1021451 network_create.go:284] running [docker network inspect embed-certs-172173] to gather additional debugging logs...
	I1208 01:40:32.235159 1021451 cli_runner.go:164] Run: docker network inspect embed-certs-172173
	W1208 01:40:32.272154 1021451 cli_runner.go:211] docker network inspect embed-certs-172173 returned with exit code 1
	I1208 01:40:32.272191 1021451 network_create.go:287] error running [docker network inspect embed-certs-172173]: docker network inspect embed-certs-172173: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-172173 not found
	I1208 01:40:32.272205 1021451 network_create.go:289] output of [docker network inspect embed-certs-172173]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-172173 not found
	
	** /stderr **
	I1208 01:40:32.272336 1021451 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:32.298355 1021451 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:40:32.307203 1021451 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:40:32.307754 1021451 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:40:32.308045 1021451 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:40:32.308580 1021451 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2e0e0}
	I1208 01:40:32.309431 1021451 network_create.go:124] attempt to create docker network embed-certs-172173 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:40:32.309534 1021451 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-172173 embed-certs-172173
	I1208 01:40:32.423534 1021451 network_create.go:108] docker network embed-certs-172173 192.168.85.0/24 created
	I1208 01:40:32.423565 1021451 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-172173" container
	I1208 01:40:32.423636 1021451 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:40:32.440514 1021451 cli_runner.go:164] Run: docker volume create embed-certs-172173 --label name.minikube.sigs.k8s.io=embed-certs-172173 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:40:32.459482 1021451 oci.go:103] Successfully created a docker volume embed-certs-172173
	I1208 01:40:32.459573 1021451 cli_runner.go:164] Run: docker run --rm --name embed-certs-172173-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-172173 --entrypoint /usr/bin/test -v embed-certs-172173:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:40:33.139924 1021451 oci.go:107] Successfully prepared a docker volume embed-certs-172173
	I1208 01:40:33.139991 1021451 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:33.140001 1021451 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:40:33.140067 1021451 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-172173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:40:37.398648 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:40:37.398671 1021094 ubuntu.go:182] provisioning hostname "no-preload-389831"
	I1208 01:40:37.398735 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:37.441103 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:37.441409 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:37.441421 1021094 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-389831 && echo "no-preload-389831" | sudo tee /etc/hostname
	I1208 01:40:37.646779 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:40:37.647122 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:37.688959 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:37.689261 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:37.689277 1021094 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-389831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-389831/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-389831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:40:37.879741 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:40:37.879770 1021094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:40:37.879797 1021094 ubuntu.go:190] setting up certificates
	I1208 01:40:37.879807 1021094 provision.go:84] configureAuth start
	I1208 01:40:37.879866 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:37.908731 1021094 provision.go:143] copyHostCerts
	I1208 01:40:37.908802 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:40:37.908812 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:40:37.908887 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:40:37.908979 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:40:37.908984 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:40:37.909007 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:40:37.909057 1021094 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:40:37.909061 1021094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:40:37.909085 1021094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:40:37.909128 1021094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.no-preload-389831 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-389831]
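
The provision step above generates a server certificate whose SANs are exactly the addresses the node will be reached on: 127.0.0.1, the static 192.168.76.2, and the localhost/minikube/profile hostnames, with the 26280h lifetime matching the CertExpiration in the cluster config further down. A self-contained sketch of producing such a certificate with Go's crypto/x509, assuming RSA keys and collapsing the pre-existing minikube CA into a freshly generated one so the example runs on its own (error handling trimmed):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; the real flow reuses .minikube/certs/ca.pem and ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-389831"}},
            DNSNames:     []string{"localhost", "minikube", "no-preload-389831"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
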
	I1208 01:40:38.740328 1021094 provision.go:177] copyRemoteCerts
	I1208 01:40:38.740403 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:40:38.740448 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:38.756928 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:38.862526 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:40:38.879576 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:40:38.896985 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:40:38.915461 1021094 provision.go:87] duration metric: took 1.035629128s to configureAuth
	I1208 01:40:38.915542 1021094 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:40:38.915744 1021094 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:40:38.915859 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:38.933685 1021094 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:38.934003 1021094 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33782 <nil> <nil>}
	I1208 01:40:38.934026 1021094 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:40:39.318219 1021094 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:40:39.318243 1021094 machine.go:97] duration metric: took 5.160935414s to provisionDockerMachine
	I1208 01:40:39.318254 1021094 client.go:176] duration metric: took 8.735073014s to LocalClient.Create
	I1208 01:40:39.318271 1021094 start.go:167] duration metric: took 8.735127964s to libmachine.API.Create "no-preload-389831"
	I1208 01:40:39.318278 1021094 start.go:293] postStartSetup for "no-preload-389831" (driver="docker")
	I1208 01:40:39.318290 1021094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:40:39.318354 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:40:39.318404 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.340613 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.446809 1021094 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:40:39.450174 1021094 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:40:39.450249 1021094 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:40:39.450268 1021094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:40:39.450331 1021094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:40:39.450414 1021094 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:40:39.450525 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:40:39.457932 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:39.475530 1021094 start.go:296] duration metric: took 157.236775ms for postStartSetup
	I1208 01:40:39.475890 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:39.492406 1021094 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:40:39.492698 1021094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:40:39.492748 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.509119 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.611848 1021094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:40:39.616363 1021094 start.go:128] duration metric: took 9.03788352s to createHost
	I1208 01:40:39.616392 1021094 start.go:83] releasing machines lock for "no-preload-389831", held for 9.03800294s
	I1208 01:40:39.616474 1021094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:40:39.637241 1021094 ssh_runner.go:195] Run: cat /version.json
	I1208 01:40:39.637302 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.637554 1021094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:40:39.637619 1021094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:40:39.658434 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.664626 1021094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33782 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:40:39.762364 1021094 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:39.858979 1021094 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:40:39.891167 1021094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:40:39.895381 1021094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:40:39.895456 1021094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:40:39.923048 1021094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:40:39.923069 1021094 start.go:496] detecting cgroup driver to use...
	I1208 01:40:39.923103 1021094 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:40:39.923181 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:40:39.940954 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:40:39.953362 1021094 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:40:39.953456 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:40:39.970866 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:40:39.987984 1021094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:40:40.115128 1021094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:40:40.240743 1021094 docker.go:234] disabling docker service ...
	I1208 01:40:40.240809 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:40:40.261355 1021094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:40:40.274583 1021094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:40:40.395150 1021094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:40:40.515923 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:40:40.528632 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:40:40.542509 1021094 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:40:40.542613 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.551518 1021094 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:40:40.551618 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.561009 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.569512 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.578268 1021094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:40:40.586416 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.595265 1021094 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.608588 1021094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:40.617344 1021094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:40:40.624856 1021094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:40:40.632091 1021094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:40.748541 1021094 ssh_runner.go:195] Run: sudo systemctl restart crio
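
The run of sed commands above converges /etc/crio/crio.conf.d/02-crio.conf on a handful of settings before the daemon-reload and crio restart: the pause image, cgroupfs as the cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A rough sketch of that end state written out as a drop-in from Go; the section placement follows upstream CRI-O defaults and is an assumption, since the log edits the existing file in place rather than rewriting it:

    package main

    import "os"

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `
        // A systemctl daemon-reload and restart of crio would follow, as in the log.
        if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }
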
	I1208 01:40:40.922344 1021094 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:40:40.922413 1021094 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:40:40.926490 1021094 start.go:564] Will wait 60s for crictl version
	I1208 01:40:40.926588 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:40.931188 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:40:40.958216 1021094 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:40:40.958332 1021094 ssh_runner.go:195] Run: crio --version
	I1208 01:40:40.988402 1021094 ssh_runner.go:195] Run: crio --version
	I1208 01:40:41.044202 1021094 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:40:36.771558 1021451 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-172173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.631447377s)
	I1208 01:40:36.771590 1021451 kic.go:203] duration metric: took 3.631585856s to extract preloaded images to volume ...
	W1208 01:40:36.771736 1021451 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:40:36.771840 1021451 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:40:36.831287 1021451 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-172173 --name embed-certs-172173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-172173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-172173 --network embed-certs-172173 --ip 192.168.85.2 --volume embed-certs-172173:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:40:37.156504 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Running}}
	I1208 01:40:37.188425 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:40:37.227168 1021451 cli_runner.go:164] Run: docker exec embed-certs-172173 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:40:37.305834 1021451 oci.go:144] the created container "embed-certs-172173" has a running status.
	I1208 01:40:37.305866 1021451 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa...
	I1208 01:40:37.635173 1021451 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:40:37.681114 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:40:37.711841 1021451 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:40:37.711859 1021451 kic_runner.go:114] Args: [docker exec --privileged embed-certs-172173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:40:37.788075 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:40:37.818864 1021451 machine.go:94] provisionDockerMachine start ...
	I1208 01:40:37.818978 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:37.851495 1021451 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:37.851853 1021451 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1208 01:40:37.851863 1021451 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:40:37.852551 1021451 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:40:41.015057 1021451 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:40:41.015146 1021451 ubuntu.go:182] provisioning hostname "embed-certs-172173"
	I1208 01:40:41.015259 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:41.039812 1021451 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:41.040131 1021451 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1208 01:40:41.040142 1021451 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-172173 && echo "embed-certs-172173" | sudo tee /etc/hostname
	I1208 01:40:41.208921 1021451 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:40:41.209008 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:41.229121 1021451 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:41.229421 1021451 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1208 01:40:41.229436 1021451 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-172173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-172173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-172173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:40:41.384099 1021451 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:40:41.384135 1021451 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:40:41.384161 1021451 ubuntu.go:190] setting up certificates
	I1208 01:40:41.384171 1021451 provision.go:84] configureAuth start
	I1208 01:40:41.384236 1021451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:40:41.407394 1021451 provision.go:143] copyHostCerts
	I1208 01:40:41.407463 1021451 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:40:41.407472 1021451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:40:41.407545 1021451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:40:41.407630 1021451 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:40:41.407635 1021451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:40:41.407660 1021451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:40:41.407708 1021451 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:40:41.407713 1021451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:40:41.407736 1021451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:40:41.407780 1021451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.embed-certs-172173 san=[127.0.0.1 192.168.85.2 embed-certs-172173 localhost minikube]
	I1208 01:40:41.046973 1021094 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:41.069224 1021094 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:40:41.073380 1021094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:40:41.086258 1021094 kubeadm.go:884] updating cluster {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:40:41.086377 1021094 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:40:41.086429 1021094 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:41.121676 1021094 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:40:41.121706 1021094 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1208 01:40:41.121762 1021094 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:41.121782 1021094 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.121959 1021094 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.121964 1021094 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.122042 1021094 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.122057 1021094 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.122121 1021094 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.122138 1021094 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.123467 1021094 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.123716 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.123841 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.123954 1021094 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.124078 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.124198 1021094 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.124316 1021094 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:41.124447 1021094 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
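
The podman inspect runs and "needs transfer" pairs that follow implement the per-image check for this no-preload profile: each expected image is looked up in the container runtime by ID, and any image that is missing or does not match the recorded hash is removed with crictl and scheduled to be loaded from the on-disk cache instead. A minimal sketch of the lookup step, shelling out the same way the log does, with the image name and expected ID taken from the coredns lines below:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime's copy of image is absent or
    // does not match the ID recorded for the cached tarball.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        img := "registry.k8s.io/coredns/coredns:v1.13.1"
        want := "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
        fmt.Println(img, "needs transfer:", needsTransfer(img, want))
    }
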
	I1208 01:40:41.358730 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.378407 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.397963 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.411580 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.418810 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.459685 1021094 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1208 01:40:41.459721 1021094 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.459767 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.463120 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1208 01:40:41.467088 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.487448 1021094 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1208 01:40:41.487487 1021094 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.487544 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.535156 1021094 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1208 01:40:41.535196 1021094 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.535244 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.629655 1021094 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1208 01:40:41.629693 1021094 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.629745 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.644365 1021094 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1208 01:40:41.644408 1021094 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.644461 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.644529 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.653325 1021094 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1208 01:40:41.653370 1021094 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1208 01:40:41.653416 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.670358 1021094 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1208 01:40:41.670395 1021094 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.670443 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:41.670509 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.670562 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.670625 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.733494 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:41.733576 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.733643 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:41.804067 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:41.804138 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:41.804193 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:41.804248 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:41.897631 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:41.897714 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:40:42.016738 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:40:42.016995 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:42.017090 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:40:42.017170 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:42.017251 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:40:42.090016 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:40:42.096041 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:40:42.096158 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:42.239473 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:40:42.239607 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:42.239709 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:40:42.239768 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:42.239817 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:42.239880 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:40:42.239924 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:42.239975 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:42.283121 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:40:42.283225 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:42.283286 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1208 01:40:42.283299 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1208 01:40:42.384700 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.384789 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1208 01:40:42.384891 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1208 01:40:42.384923 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1208 01:40:42.385024 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:40:42.385134 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.385214 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.385264 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1208 01:40:42.385354 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:42.385441 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:42.385537 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.385585 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	W1208 01:40:42.408987 1021094 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:40:42.409159 1021094 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:42.448948 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1208 01:40:42.448991 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1208 01:40:42.449028 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1208 01:40:42.449041 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
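The recurring pattern in the lines above — stat the image tarball on the node, and scp it over from the local cache only when the stat fails — can be sketched roughly as below. This is a minimal illustration using the ssh/scp binaries directly; the helper name and host string are assumptions, not minikube's actual ssh_runner API.

package main

import (
	"fmt"
	"os/exec"
)

// copyIfMissing checks whether a file already exists on the remote host
// (via `stat` over ssh) and copies it with scp only when it does not.
func copyIfMissing(host, local, remote string) error {
	// Existence check: stat exits non-zero if the path is missing.
	check := exec.Command("ssh", host, "stat", "-c", "%s %y", remote)
	if err := check.Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Transfer the cached image tarball to the node.
	if out, err := exec.Command("scp", local, host+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	_ = copyIfMissing("docker@127.0.0.1",
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
}
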
	I1208 01:40:42.655244 1021094 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1208 01:40:42.655295 1021094 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:42.655346 1021094 ssh_runner.go:195] Run: which crictl
	I1208 01:40:42.702200 1021094 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.702267 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1208 01:40:42.769661 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:43.233664 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:43.233763 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1208 01:40:43.233839 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:43.233907 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:40:45.289146 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.055202589s)
	I1208 01:40:45.289180 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1208 01:40:45.289198 1021094 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:45.289255 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:40:45.289318 1021094 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055593018s)
	I1208 01:40:45.289362 1021094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:40:42.647279 1021451 provision.go:177] copyRemoteCerts
	I1208 01:40:42.647392 1021451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:40:42.647467 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:42.746755 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:40:42.887758 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:40:42.917182 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 01:40:42.955614 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:40:43.004848 1021451 provision.go:87] duration metric: took 1.620662339s to configureAuth
	I1208 01:40:43.004881 1021451 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:40:43.005093 1021451 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:40:43.005210 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:43.030920 1021451 main.go:143] libmachine: Using SSH client type: native
	I1208 01:40:43.031230 1021451 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1208 01:40:43.031248 1021451 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:40:43.432512 1021451 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:40:43.432541 1021451 machine.go:97] duration metric: took 5.613649676s to provisionDockerMachine
	I1208 01:40:43.432552 1021451 client.go:176] duration metric: took 11.217846067s to LocalClient.Create
	I1208 01:40:43.432567 1021451 start.go:167] duration metric: took 11.217905392s to libmachine.API.Create "embed-certs-172173"
	I1208 01:40:43.432574 1021451 start.go:293] postStartSetup for "embed-certs-172173" (driver="docker")
	I1208 01:40:43.432585 1021451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:40:43.432665 1021451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:40:43.432719 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:43.464731 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:40:43.579709 1021451 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:40:43.583656 1021451 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:40:43.583736 1021451 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:40:43.583763 1021451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:40:43.583850 1021451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:40:43.583984 1021451 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:40:43.584141 1021451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:40:43.592055 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:43.611736 1021451 start.go:296] duration metric: took 179.138505ms for postStartSetup
	I1208 01:40:43.612203 1021451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:40:43.631408 1021451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:40:43.631675 1021451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:40:43.631718 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:43.657163 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:40:43.765271 1021451 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:40:43.771042 1021451 start.go:128] duration metric: took 11.563721828s to createHost
	I1208 01:40:43.771065 1021451 start.go:83] releasing machines lock for "embed-certs-172173", held for 11.563856526s
	I1208 01:40:43.771134 1021451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:40:43.789265 1021451 ssh_runner.go:195] Run: cat /version.json
	I1208 01:40:43.789322 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:43.789584 1021451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:40:43.789665 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:40:43.821164 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:40:43.837163 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:40:43.946925 1021451 ssh_runner.go:195] Run: systemctl --version
	I1208 01:40:44.042066 1021451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:40:44.098626 1021451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:40:44.103540 1021451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:40:44.103626 1021451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:40:44.152354 1021451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:40:44.152382 1021451 start.go:496] detecting cgroup driver to use...
	I1208 01:40:44.152414 1021451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:40:44.152485 1021451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:40:44.177462 1021451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:40:44.193236 1021451 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:40:44.193317 1021451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:40:44.213986 1021451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:40:44.239321 1021451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:40:44.453546 1021451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:40:44.645920 1021451 docker.go:234] disabling docker service ...
	I1208 01:40:44.645998 1021451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:40:44.668128 1021451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:40:44.682385 1021451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:40:44.829528 1021451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:40:44.971368 1021451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:40:44.985572 1021451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:40:45.001229 1021451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:40:45.001391 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.015736 1021451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:40:45.015901 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.043036 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.058339 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.075706 1021451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:40:45.086884 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.100061 1021451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.118972 1021451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:40:45.133651 1021451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:40:45.144435 1021451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:40:45.155018 1021451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:45.352196 1021451 ssh_runner.go:195] Run: sudo systemctl restart crio
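The block of `sudo sed -i` calls above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, sysctls) before restarting CRI-O. A minimal Go sketch of one such substitution, the pause_image rewrite, is shown below; the path and behaviour mirror the log, but the program itself is illustrative and assumes it runs as root on the node.

package main

import (
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart crio` (or daemon-reload + restart) would follow, as in the log.
}
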
	I1208 01:40:45.871668 1021451 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:40:45.871749 1021451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:40:45.880312 1021451 start.go:564] Will wait 60s for crictl version
	I1208 01:40:45.880398 1021451 ssh_runner.go:195] Run: which crictl
	I1208 01:40:45.884808 1021451 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:40:45.922709 1021451 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:40:45.922821 1021451 ssh_runner.go:195] Run: crio --version
	I1208 01:40:45.992882 1021451 ssh_runner.go:195] Run: crio --version
	I1208 01:40:46.035012 1021451 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:40:46.037793 1021451 cli_runner.go:164] Run: docker network inspect embed-certs-172173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:40:46.062946 1021451 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:40:46.066737 1021451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
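The /etc/hosts one-liner above drops any existing "host.minikube.internal" mapping and appends a fresh one. A small Go sketch of the same filter-and-append step, assuming local file access rather than the remote bash pipeline minikube actually uses:

package main

import (
	"os"
	"strings"
)

// addHostsEntry removes any line ending in "\t<name>" and appends "<ip>\t<name>".
func addHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = addHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal")
}
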
	I1208 01:40:46.080252 1021451 kubeadm.go:884] updating cluster {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:40:46.080390 1021451 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:40:46.080461 1021451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:46.158535 1021451 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:46.158561 1021451 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:40:46.158628 1021451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:40:46.202658 1021451 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:40:46.202680 1021451 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:40:46.202688 1021451 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:40:46.202775 1021451 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-172173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:40:46.202879 1021451 ssh_runner.go:195] Run: crio config
	I1208 01:40:46.309768 1021451 cni.go:84] Creating CNI manager for ""
	I1208 01:40:46.309793 1021451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:46.309814 1021451 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:40:46.309839 1021451 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-172173 NodeName:embed-certs-172173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:40:46.309979 1021451 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-172173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:40:46.310055 1021451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:40:46.318371 1021451 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:40:46.318447 1021451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:40:46.326264 1021451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1208 01:40:46.340424 1021451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:40:46.354007 1021451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1208 01:40:46.368015 1021451 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:40:46.372228 1021451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:40:46.381930 1021451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:46.560159 1021451 ssh_runner.go:195] Run: sudo systemctl start kubelet
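The preceding steps scp the generated kubelet unit and kubeadm drop-in to the node, then daemon-reload and start kubelet. A rough, abridged Go sketch of that step (writing the drop-in locally and invoking systemctl) is shown below; the ExecStart line is shortened and the whole program is illustrative, assuming root on the node.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abridged version of the [Unit]/[Service] drop-in shown earlier in the log.
	dropIn := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-172173 --node-ip=192.168.85.2

[Install]
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// Reload unit files and start kubelet, as in the log.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
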
	I1208 01:40:46.576130 1021451 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173 for IP: 192.168.85.2
	I1208 01:40:46.576152 1021451 certs.go:195] generating shared ca certs ...
	I1208 01:40:46.576169 1021451 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:46.576306 1021451 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:40:46.576353 1021451 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:40:46.576364 1021451 certs.go:257] generating profile certs ...
	I1208 01:40:46.576418 1021451 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.key
	I1208 01:40:46.576433 1021451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.crt with IP's: []
	I1208 01:40:47.879635 1021094 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.590245918s)
	I1208 01:40:47.879676 1021094 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:40:47.879768 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:47.879905 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.590635429s)
	I1208 01:40:47.879913 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1208 01:40:47.879931 1021094 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:47.879959 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:40:49.505230 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.625251106s)
	I1208 01:40:49.505256 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1208 01:40:49.505274 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:49.505322 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:40:49.505384 1021094 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.625606901s)
	I1208 01:40:49.505402 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1208 01:40:49.505417 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1208 01:40:46.689545 1021451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.crt ...
	I1208 01:40:46.689618 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.crt: {Name:mkde6dec56d1115a15dbea1fcacaeaa4e473dea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:46.689860 1021451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.key ...
	I1208 01:40:46.689900 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.key: {Name:mk8de2b3e8b8ae90aa9ba1282c3761c387f6da1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:46.690065 1021451 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7
	I1208 01:40:46.690108 1021451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt.d90ebbe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:40:46.937129 1021451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt.d90ebbe7 ...
	I1208 01:40:46.937205 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt.d90ebbe7: {Name:mkdf4d84445040931379c217c7af6ebc6dedd8f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:46.937397 1021451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7 ...
	I1208 01:40:46.937438 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7: {Name:mk0d045ef5cb139b95e598dea447c698ca6f7cf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:46.937551 1021451 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt.d90ebbe7 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt
	I1208 01:40:46.937658 1021451 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key
	I1208 01:40:46.937757 1021451 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key
	I1208 01:40:46.937805 1021451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt with IP's: []
	I1208 01:40:47.189255 1021451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt ...
	I1208 01:40:47.189285 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt: {Name:mkfcf7abeb47b76b9b1f5e82564c3bff374cdf32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:47.189458 1021451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key ...
	I1208 01:40:47.189474 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key: {Name:mke2c277c8aed5d0b66f16c33f52e0a496e57948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
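The "generating signed profile cert ... with IP's: [...]" steps above create a key pair and an x509 certificate carrying those IP SANs, signed by the cluster CA. The sketch below shows the general crypto/x509 flow; it generates a throwaway CA in place of loading ca.crt/ca.key and is a simplified stand-in for minikube's crypto helpers, not their actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube would load the existing minikubeCA key pair instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * 365 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs seen in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
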
	I1208 01:40:47.189691 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:40:47.189736 1021451 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:40:47.189751 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:40:47.189779 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:40:47.189810 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:40:47.189838 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:40:47.189887 1021451 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:47.190450 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:40:47.209235 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:40:47.240804 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:40:47.261969 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:40:47.281045 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1208 01:40:47.300207 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:40:47.320147 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:40:47.338961 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:40:47.357282 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:40:47.376175 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:40:47.394601 1021451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:40:47.414167 1021451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:40:47.428431 1021451 ssh_runner.go:195] Run: openssl version
	I1208 01:40:47.435765 1021451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:40:47.443413 1021451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:40:47.451103 1021451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:40:47.455674 1021451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:40:47.455749 1021451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:40:47.511367 1021451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:47.522017 1021451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:47.530670 1021451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:47.543193 1021451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:40:47.556474 1021451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:47.561493 1021451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:47.561565 1021451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:47.627998 1021451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:40:47.635968 1021451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:40:47.643546 1021451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:40:47.651020 1021451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:40:47.658834 1021451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:40:47.663476 1021451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:40:47.663546 1021451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:40:47.706135 1021451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:40:47.714355 1021451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
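The repeating `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` sequence above installs each CA certificate under its OpenSSL subject-hash name so the system trust store can find it. A simplified Go sketch of one iteration, assuming the openssl binary is available and the process can write to /etc/ssl/certs:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks a PEM certificate to /etc/ssl/certs/<subject-hash>.0.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	_ = installCACert("/usr/share/ca-certificates/minikubeCA.pem")
}
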
	I1208 01:40:47.722475 1021451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:40:47.727482 1021451 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:40:47.727538 1021451 kubeadm.go:401] StartCluster: {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:47.727616 1021451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:40:47.727684 1021451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:40:47.763341 1021451 cri.go:89] found id: ""
	I1208 01:40:47.763464 1021451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:40:47.775805 1021451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:40:47.784711 1021451 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:40:47.784819 1021451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:40:47.794546 1021451 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:40:47.794567 1021451 kubeadm.go:158] found existing configuration files:
	
	I1208 01:40:47.794624 1021451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:40:47.803072 1021451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:40:47.803168 1021451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:40:47.812114 1021451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:40:47.821469 1021451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:40:47.821611 1021451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:40:47.830187 1021451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:40:47.838946 1021451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:40:47.839006 1021451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:40:47.847318 1021451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:40:47.855155 1021451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:40:47.855261 1021451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:40:47.863422 1021451 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:40:47.965967 1021451 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 01:40:47.966236 1021451 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:40:48.053849 1021451 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:40:51.515481 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (2.010138753s)
	I1208 01:40:51.515506 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1208 01:40:51.515534 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:51.515595 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:40:53.283994 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.768373302s)
	I1208 01:40:53.284022 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1208 01:40:53.284040 1021094 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:53.284087 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:40:54.776767 1021094 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.492643815s)
	I1208 01:40:54.776790 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1208 01:40:54.776809 1021094 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:54.776887 1021094 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:40:55.542261 1021094 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1208 01:40:55.542302 1021094 cache_images.go:125] Successfully loaded all cached images
	I1208 01:40:55.542334 1021094 cache_images.go:94] duration metric: took 14.42058748s to LoadCachedImages
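The LoadCachedImages phase above is essentially a loop of `sudo podman load -i <tarball>` over the transferred image archives, with each completion logged as "Transferred and loaded ... from cache". A minimal Go sketch of that loop, with an illustrative image list, is:

package main

import (
	"log"
	"os/exec"
)

func main() {
	images := []string{
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/etcd_3.6.5-0",
		"/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, tar := range images {
		// Load the OCI archive into the node's container storage.
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("podman load %s failed: %v\n%s", tar, err, out)
		}
		log.Printf("loaded %s", tar)
	}
}
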
	I1208 01:40:55.542353 1021094 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:40:55.542494 1021094 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-389831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:40:55.542618 1021094 ssh_runner.go:195] Run: crio config
	I1208 01:40:55.608304 1021094 cni.go:84] Creating CNI manager for ""
	I1208 01:40:55.608330 1021094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:40:55.608371 1021094 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:40:55.608409 1021094 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-389831 NodeName:no-preload-389831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:40:55.608604 1021094 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-389831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
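The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init. A minimal sketch of how such a rendered config could be sanity-checked before it is shipped to the node follows; it assumes gopkg.in/yaml.v3 and a hypothetical local copy named kubeadm.yaml, and is illustrative only, not minikube's own validation path.

// validate_kubeadm_yaml.go: decode every document in a multi-doc kubeadm
// config and print its apiVersion/kind, failing on malformed YAML.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			log.Fatalf("document %d is not valid YAML: %v", i, err)
		}
		if doc.APIVersion == "" || doc.Kind == "" {
			log.Fatalf("document %d is missing apiVersion or kind", i)
		}
		fmt.Printf("document %d: %s/%s\n", i, doc.APIVersion, doc.Kind)
	}
}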
	I1208 01:40:55.608716 1021094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:40:55.618315 1021094 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1208 01:40:55.618427 1021094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:40:55.626985 1021094 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1208 01:40:55.627359 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1208 01:40:55.627952 1021094 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1208 01:40:55.628367 1021094 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1208 01:40:55.633443 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1208 01:40:55.633488 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1208 01:40:56.620488 1021094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:40:56.639267 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1208 01:40:56.644242 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1208 01:40:56.644297 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1208 01:40:56.732411 1021094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1208 01:40:56.742026 1021094 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1208 01:40:56.742074 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
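Because this no-preload profile has no cached binaries on the node, kubectl, kubelet and kubeadm are fetched from dl.k8s.io and verified against the published .sha256 files before being copied into /var/lib/minikube/binaries. A stand-alone sketch of that download-and-verify step, assuming the .sha256 file carries the hex digest as its first field; this is the idea only, not minikube's download package.

// fetch_and_verify.go: download a release binary and check it against the
// published .sha256 file before writing it to disk.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256") // assumed to contain the hex digest as its first field
	if err != nil {
		log.Fatal(err)
	}

	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubectl verified and written")
}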
	I1208 01:40:57.367860 1021094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:40:57.386337 1021094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:40:57.406473 1021094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:40:57.421029 1021094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 01:40:57.434825 1021094 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:40:57.439223 1021094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
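The grep/echo one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP, dropping any stale mapping first. An equivalent idempotent update in Go (assuming root access to /etc/hosts) could look like this sketch:

// ensure_hosts_entry.go: idempotently pin control-plane.minikube.internal to
// an IP in /etc/hosts, mirroring the grep/echo one-liner in the log.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		hostname  = "control-plane.minikube.internal"
		ip        = "192.168.76.2"
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for the control-plane name.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank elements so empty lines don't accumulate.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+hostname, "")

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}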
	I1208 01:40:57.449248 1021094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:40:57.640067 1021094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:40:57.685241 1021094 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831 for IP: 192.168.76.2
	I1208 01:40:57.685310 1021094 certs.go:195] generating shared ca certs ...
	I1208 01:40:57.685341 1021094 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.685534 1021094 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:40:57.685603 1021094 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:40:57.685624 1021094 certs.go:257] generating profile certs ...
	I1208 01:40:57.685707 1021094 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key
	I1208 01:40:57.685741 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt with IP's: []
	I1208 01:40:57.798040 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt ...
	I1208 01:40:57.798074 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt: {Name:mk43ee9cb64d4d36ddab24e767a95ef0e5d2d3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.798305 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key ...
	I1208 01:40:57.798320 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key: {Name:mk7d00067baa29a2737ac83ba8ddb47ef30348a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.798425 1021094 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e
	I1208 01:40:57.798444 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:40:57.927141 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e ...
	I1208 01:40:57.927177 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e: {Name:mk4240accc36220fe97de733b0df0bcfda683f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.927358 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e ...
	I1208 01:40:57.927376 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e: {Name:mk4ddd25b3ece0ab46286ccbd524a494c1408f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:57.927455 1021094 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt.2f54046e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt
	I1208 01:40:57.927535 1021094 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key
	I1208 01:40:57.927600 1021094 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key
	I1208 01:40:57.927621 1021094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt with IP's: []
	I1208 01:40:58.305332 1021094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt ...
	I1208 01:40:58.305364 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt: {Name:mkf0fe7312d071deb211429779cb97fae64ec03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:40:58.305547 1021094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key ...
	I1208 01:40:58.305567 1021094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key: {Name:mk723b5789ca1fc903f77c35a669827cb9f89c85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
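The profile certificates above are signed by the shared minikubeCA, and the apiserver cert carries the service VIP, loopback and node IP as SANs. A compact sketch of issuing such a cert with crypto/x509 follows; it assumes local ca.crt/ca.key PEM files and an RSA CA key, and is illustrative only, not minikube's certs package.

// sign_apiserver_cert.go: issue a server certificate with IP SANs, signed by
// an existing CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func readPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("%s: no PEM block found", path)
	}
	return block.Bytes
}

func writePEM(path, typ string, der []byte, mode os.FileMode) {
	if err := os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: typ, Bytes: der}), mode); err != nil {
		log.Fatal(err)
	}
}

func main() {
	caCert, err := x509.ParseCertificate(readPEM("ca.crt"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(readPEM("ca.key")) // assumption: RSA CA key in PKCS#1 form
	if err != nil {
		log.Fatal(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: service VIP, loopback, and the node IP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("apiserver.crt", "CERTIFICATE", der, 0o644)
	writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key), 0o600)
}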
	I1208 01:40:58.305758 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:40:58.305812 1021094 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:40:58.305827 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:40:58.305855 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:40:58.305886 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:40:58.305917 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:40:58.305971 1021094 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:40:58.306535 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:40:58.337508 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:40:58.365804 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:40:58.401533 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:40:58.435305 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:40:58.468794 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:40:58.505545 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:40:58.558346 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:40:58.600280 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:40:58.641629 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:40:58.680780 1021094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:40:58.705473 1021094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:40:58.737548 1021094 ssh_runner.go:195] Run: openssl version
	I1208 01:40:58.751909 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.764191 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:40:58.778005 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.786094 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.786162 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:40:58.859688 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:40:58.870971 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:40:58.880979 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.896850 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:40:58.904951 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.909107 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.909180 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:40:58.956042 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:58.963909 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:40:58.971511 1021094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.979389 1021094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:40:58.987098 1021094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.991291 1021094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:58.991411 1021094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:40:59.034640 1021094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:40:59.042908 1021094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
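Each CA bundle copied into /usr/share/ca-certificates is exposed to the system trust store by computing its OpenSSL subject hash and symlinking /etc/ssl/certs/<hash>.0 to it, which is what the openssl / ln -fs pairs above do. A small sketch of the same operation, assuming openssl is on PATH:

// link_ca_by_hash.go: run `openssl x509 -hash -noout -in <pem>` and create the
// /etc/ssl/certs/<hash>.0 symlink that the TLS stack looks up.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const certPath = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %s\n", link, certPath)
}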
	I1208 01:40:59.050588 1021094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:40:59.054740 1021094 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:40:59.054795 1021094 kubeadm.go:401] StartCluster: {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:40:59.054883 1021094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:40:59.054946 1021094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:40:59.113395 1021094 cri.go:89] found id: ""
	I1208 01:40:59.113462 1021094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:40:59.122084 1021094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:40:59.130170 1021094 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:40:59.130233 1021094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:40:59.145103 1021094 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:40:59.145130 1021094 kubeadm.go:158] found existing configuration files:
	
	I1208 01:40:59.145190 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:40:59.153776 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:40:59.153841 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:40:59.161501 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:40:59.174279 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:40:59.174390 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:40:59.189242 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:40:59.199847 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:40:59.199960 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:40:59.214078 1021094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:40:59.227057 1021094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:40:59.227167 1021094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:40:59.236769 1021094 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:40:59.332768 1021094 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:40:59.333181 1021094 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:40:59.479226 1021094 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:40:59.479352 1021094 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:40:59.479415 1021094 kubeadm.go:319] OS: Linux
	I1208 01:40:59.479478 1021094 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:40:59.479553 1021094 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:40:59.479616 1021094 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:40:59.479686 1021094 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:40:59.479769 1021094 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:40:59.479854 1021094 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:40:59.479966 1021094 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:40:59.480053 1021094 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:40:59.480144 1021094 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:40:59.575913 1021094 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:40:59.576029 1021094 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:40:59.576124 1021094 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:40:59.619232 1021094 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:40:59.626183 1021094 out.go:252]   - Generating certificates and keys ...
	I1208 01:40:59.626288 1021094 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:40:59.626359 1021094 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:40:59.794659 1021094 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:40:59.983167 1021094 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:41:00.099229 1021094 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:41:00.511838 1021094 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:41:01.492176 1021094 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:41:01.492685 1021094 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:41:01.626325 1021094 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:41:01.626939 1021094 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-389831] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:41:01.840581 1021094 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:41:02.091995 1021094 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:41:02.170533 1021094 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:41:02.171147 1021094 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:41:02.777854 1021094 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:41:03.544958 1021094 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:41:03.918083 1021094 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:41:04.119224 1021094 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:41:04.441753 1021094 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:41:04.442881 1021094 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:41:04.445832 1021094 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:41:04.449779 1021094 out.go:252]   - Booting up control plane ...
	I1208 01:41:04.449901 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:41:04.449985 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:41:04.454509 1021094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:41:04.478904 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:41:04.479017 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:41:04.490174 1021094 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:41:04.490806 1021094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:41:04.491291 1021094 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:41:04.692853 1021094 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:41:04.692979 1021094 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
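kubeadm then blocks on the kubelet's local health endpoint (http://127.0.0.1:10248/healthz) for up to 4m0s. An equivalent poll, as a sketch:

// wait_kubelet_healthy.go: poll the kubelet healthz endpoint until it answers
// 200 OK or a deadline passes, roughly what the kubelet-check phase does.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	const url = "http://127.0.0.1:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	client := &http.Client{Timeout: 2 * time.Second}

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for a healthy kubelet")
}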
	I1208 01:41:07.440753 1021451 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 01:41:07.440824 1021451 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:41:07.440915 1021451 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:41:07.440980 1021451 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:41:07.441017 1021451 kubeadm.go:319] OS: Linux
	I1208 01:41:07.441065 1021451 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:41:07.441133 1021451 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:41:07.441204 1021451 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:41:07.441257 1021451 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:41:07.441317 1021451 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:41:07.441385 1021451 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:41:07.441435 1021451 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:41:07.441495 1021451 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:41:07.441545 1021451 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:41:07.441617 1021451 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:41:07.441713 1021451 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:41:07.441842 1021451 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:41:07.441927 1021451 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:41:07.446673 1021451 out.go:252]   - Generating certificates and keys ...
	I1208 01:41:07.446781 1021451 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:41:07.446883 1021451 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:41:07.446979 1021451 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:41:07.447058 1021451 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:41:07.447144 1021451 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:41:07.447200 1021451 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:41:07.447264 1021451 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:41:07.447389 1021451 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-172173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:41:07.447468 1021451 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:41:07.447600 1021451 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-172173 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:41:07.447678 1021451 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:41:07.447774 1021451 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:41:07.447831 1021451 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:41:07.447889 1021451 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:41:07.447946 1021451 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:41:07.448017 1021451 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:41:07.448093 1021451 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:41:07.448175 1021451 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:41:07.448243 1021451 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:41:07.448362 1021451 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:41:07.448449 1021451 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:41:07.451511 1021451 out.go:252]   - Booting up control plane ...
	I1208 01:41:07.451631 1021451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:41:07.451732 1021451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:41:07.451801 1021451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:41:07.451935 1021451 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:41:07.452045 1021451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:41:07.452174 1021451 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:41:07.452285 1021451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:41:07.452328 1021451 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:41:07.452461 1021451 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:41:07.452569 1021451 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:41:07.452631 1021451 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.514842159s
	I1208 01:41:07.452740 1021451 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 01:41:07.452825 1021451 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1208 01:41:07.452917 1021451 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 01:41:07.452998 1021451 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 01:41:07.453076 1021451 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.190762082s
	I1208 01:41:07.453145 1021451 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.08679163s
	I1208 01:41:07.453215 1021451 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.501888051s
	I1208 01:41:07.453323 1021451 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 01:41:07.453459 1021451 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 01:41:07.453531 1021451 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 01:41:07.453719 1021451 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-172173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 01:41:07.453778 1021451 kubeadm.go:319] [bootstrap-token] Using token: 3c96nt.fw9cyeaysqgp7m5c
	I1208 01:41:07.456764 1021451 out.go:252]   - Configuring RBAC rules ...
	I1208 01:41:07.456890 1021451 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 01:41:07.456977 1021451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 01:41:07.457147 1021451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 01:41:07.457305 1021451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 01:41:07.457448 1021451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 01:41:07.457546 1021451 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 01:41:07.457676 1021451 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 01:41:07.457724 1021451 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 01:41:07.457776 1021451 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 01:41:07.457783 1021451 kubeadm.go:319] 
	I1208 01:41:07.457848 1021451 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 01:41:07.457855 1021451 kubeadm.go:319] 
	I1208 01:41:07.457938 1021451 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 01:41:07.457945 1021451 kubeadm.go:319] 
	I1208 01:41:07.457972 1021451 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 01:41:07.458040 1021451 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 01:41:07.458100 1021451 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 01:41:07.458108 1021451 kubeadm.go:319] 
	I1208 01:41:07.458167 1021451 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 01:41:07.458174 1021451 kubeadm.go:319] 
	I1208 01:41:07.458225 1021451 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 01:41:07.458232 1021451 kubeadm.go:319] 
	I1208 01:41:07.458288 1021451 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 01:41:07.458372 1021451 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 01:41:07.458448 1021451 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 01:41:07.458455 1021451 kubeadm.go:319] 
	I1208 01:41:07.458549 1021451 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 01:41:07.458642 1021451 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 01:41:07.458649 1021451 kubeadm.go:319] 
	I1208 01:41:07.458740 1021451 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3c96nt.fw9cyeaysqgp7m5c \
	I1208 01:41:07.458978 1021451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 01:41:07.459007 1021451 kubeadm.go:319] 	--control-plane 
	I1208 01:41:07.459011 1021451 kubeadm.go:319] 
	I1208 01:41:07.459103 1021451 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 01:41:07.459122 1021451 kubeadm.go:319] 
	I1208 01:41:07.459211 1021451 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3c96nt.fw9cyeaysqgp7m5c \
	I1208 01:41:07.459339 1021451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 01:41:07.459351 1021451 cni.go:84] Creating CNI manager for ""
	I1208 01:41:07.459358 1021451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:41:07.462421 1021451 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 01:41:07.465309 1021451 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 01:41:07.469318 1021451 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 01:41:07.469340 1021451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 01:41:07.483288 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 01:41:07.781337 1021451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 01:41:07.781463 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:07.781530 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-172173 minikube.k8s.io/updated_at=2025_12_08T01_41_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=embed-certs-172173 minikube.k8s.io/primary=true
	I1208 01:41:07.965198 1021451 ops.go:34] apiserver oom_adj: -16
	I1208 01:41:07.965341 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:08.466298 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:08.965492 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:09.465612 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:09.965395 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:10.466055 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:10.965434 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:11.466018 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:11.966386 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:12.466367 1021451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:41:12.578368 1021451 kubeadm.go:1114] duration metric: took 4.796946079s to wait for elevateKubeSystemPrivileges
	I1208 01:41:12.578395 1021451 kubeadm.go:403] duration metric: took 24.850861559s to StartCluster
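The half-second retry loop above simply re-runs kubectl get sa default until the default ServiceAccount exists, which is the condition elevateKubeSystemPrivileges waits for before the minikube-rbac binding is usable. A sketch of the same wait, assuming kubectl on PATH and the node kubeconfig at /var/lib/minikube/kubeconfig:

// wait_default_sa.go: retry `kubectl get sa default` until the ServiceAccount
// exists, mirroring the 500ms polling loop in the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("default service account never appeared")
}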
	I1208 01:41:12.578412 1021451 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:41:12.578474 1021451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:41:12.579506 1021451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:41:12.579717 1021451 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:41:12.579844 1021451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 01:41:12.580099 1021451 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:41:12.580134 1021451 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:41:12.580194 1021451 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-172173"
	I1208 01:41:12.580208 1021451 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-172173"
	I1208 01:41:12.580229 1021451 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:41:12.580888 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:41:12.581290 1021451 addons.go:70] Setting default-storageclass=true in profile "embed-certs-172173"
	I1208 01:41:12.581314 1021451 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-172173"
	I1208 01:41:12.581616 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:41:12.584522 1021451 out.go:179] * Verifying Kubernetes components...
	I1208 01:41:12.591364 1021451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:41:12.619332 1021451 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:41:12.621260 1021451 addons.go:239] Setting addon default-storageclass=true in "embed-certs-172173"
	I1208 01:41:12.621305 1021451 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:41:12.621753 1021451 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:41:12.626398 1021451 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:41:12.626424 1021451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:41:12.626493 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:41:12.668448 1021451 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:41:12.668469 1021451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:41:12.668547 1021451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:41:12.693981 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:41:12.712267 1021451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:41:12.827782 1021451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 01:41:12.904503 1021451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:41:12.963939 1021451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:41:12.999727 1021451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:41:13.249665 1021451 node_ready.go:35] waiting up to 6m0s for node "embed-certs-172173" to be "Ready" ...
	I1208 01:41:13.251269 1021451 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1208 01:41:13.594056 1021451 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1208 01:41:13.598449 1021451 addons.go:530] duration metric: took 1.018301592s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1208 01:41:13.756296 1021451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-172173" context rescaled to 1 replicas
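Rescaling the coredns Deployment to one replica, as kapi does here, can be expressed through the scale subresource. A client-go sketch follows (kubeconfig path and names taken from this log; not the code minikube actually runs):

// rescale_coredns.go: pin the coredns Deployment to a single replica via the
// Deployment scale subresource.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := client.AppsV1().Deployments("kube-system")

	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}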
	W1208 01:41:15.252591 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:17.252921 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:19.253227 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:21.753858 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:24.255705 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:26.753197 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:28.753432 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:31.252392 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:33.260128 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:35.752436 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:37.752491 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:39.752535 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:41.752819 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:43.755014 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:46.252702 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:48.253886 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:50.753260 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	W1208 01:41:53.253575 1021451 node_ready.go:57] node "embed-certs-172173" has "Ready":"False" status (will retry)
	I1208 01:41:54.757465 1021451 node_ready.go:49] node "embed-certs-172173" is "Ready"
	I1208 01:41:54.757494 1021451 node_ready.go:38] duration metric: took 41.507792491s for node "embed-certs-172173" to be "Ready" ...
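node_ready is satisfied once the node reports a Ready condition with status True. The same check against the API, sketched with client-go (node name and kubeconfig path from this log; an illustration, not minikube code):

// wait_node_ready.go: poll for the Ready condition on a node using client-go.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "embed-certs-172173", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}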
	I1208 01:41:54.757509 1021451 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:41:54.757570 1021451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:41:54.774921 1021451 api_server.go:72] duration metric: took 42.195175974s to wait for apiserver process to appear ...
	I1208 01:41:54.774944 1021451 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:41:54.774963 1021451 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:41:54.783442 1021451 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:41:54.787007 1021451 api_server.go:141] control plane version: v1.34.2
	I1208 01:41:54.787033 1021451 api_server.go:131] duration metric: took 12.082746ms to wait for apiserver health ...
	I1208 01:41:54.787044 1021451 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:41:54.799588 1021451 system_pods.go:59] 8 kube-system pods found
	I1208 01:41:54.799696 1021451 system_pods.go:61] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:41:54.799719 1021451 system_pods.go:61] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running
	I1208 01:41:54.799751 1021451 system_pods.go:61] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:41:54.799774 1021451 system_pods.go:61] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running
	I1208 01:41:54.799793 1021451 system_pods.go:61] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running
	I1208 01:41:54.799818 1021451 system_pods.go:61] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:41:54.799833 1021451 system_pods.go:61] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running
	I1208 01:41:54.799860 1021451 system_pods.go:61] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:41:54.799884 1021451 system_pods.go:74] duration metric: took 12.833688ms to wait for pod list to return data ...
	I1208 01:41:54.799905 1021451 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:41:54.805692 1021451 default_sa.go:45] found service account: "default"
	I1208 01:41:54.805715 1021451 default_sa.go:55] duration metric: took 5.792837ms for default service account to be created ...
	I1208 01:41:54.805725 1021451 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:41:54.815937 1021451 system_pods.go:86] 8 kube-system pods found
	I1208 01:41:54.815995 1021451 system_pods.go:89] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:41:54.816005 1021451 system_pods.go:89] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running
	I1208 01:41:54.816013 1021451 system_pods.go:89] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:41:54.816017 1021451 system_pods.go:89] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running
	I1208 01:41:54.816022 1021451 system_pods.go:89] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running
	I1208 01:41:54.816026 1021451 system_pods.go:89] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:41:54.816030 1021451 system_pods.go:89] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running
	I1208 01:41:54.816035 1021451 system_pods.go:89] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:41:54.816055 1021451 retry.go:31] will retry after 303.856827ms: missing components: kube-dns
	I1208 01:41:55.131310 1021451 system_pods.go:86] 8 kube-system pods found
	I1208 01:41:55.131399 1021451 system_pods.go:89] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Running
	I1208 01:41:55.131422 1021451 system_pods.go:89] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running
	I1208 01:41:55.131442 1021451 system_pods.go:89] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:41:55.131476 1021451 system_pods.go:89] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running
	I1208 01:41:55.131501 1021451 system_pods.go:89] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running
	I1208 01:41:55.131517 1021451 system_pods.go:89] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:41:55.131535 1021451 system_pods.go:89] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running
	I1208 01:41:55.131551 1021451 system_pods.go:89] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Running
	I1208 01:41:55.131584 1021451 system_pods.go:126] duration metric: took 325.851425ms to wait for k8s-apps to be running ...
	I1208 01:41:55.131616 1021451 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:41:55.131704 1021451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:41:55.146539 1021451 system_svc.go:56] duration metric: took 14.921129ms WaitForService to wait for kubelet
	I1208 01:41:55.146574 1021451 kubeadm.go:587] duration metric: took 42.566834447s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:41:55.146593 1021451 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:41:55.157426 1021451 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:41:55.157503 1021451 node_conditions.go:123] node cpu capacity is 2
	I1208 01:41:55.157532 1021451 node_conditions.go:105] duration metric: took 10.933022ms to run NodePressure ...
	I1208 01:41:55.157557 1021451 start.go:242] waiting for startup goroutines ...
	I1208 01:41:55.157587 1021451 start.go:247] waiting for cluster config update ...
	I1208 01:41:55.157618 1021451 start.go:256] writing updated cluster config ...
	I1208 01:41:55.157924 1021451 ssh_runner.go:195] Run: rm -f paused
	I1208 01:41:55.161893 1021451 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:41:55.166471 1021451 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.171481 1021451 pod_ready.go:94] pod "coredns-66bc5c9577-x7llx" is "Ready"
	I1208 01:41:55.171555 1021451 pod_ready.go:86] duration metric: took 5.004726ms for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.173981 1021451 pod_ready.go:83] waiting for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.178388 1021451 pod_ready.go:94] pod "etcd-embed-certs-172173" is "Ready"
	I1208 01:41:55.178458 1021451 pod_ready.go:86] duration metric: took 4.417117ms for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.180937 1021451 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.190992 1021451 pod_ready.go:94] pod "kube-apiserver-embed-certs-172173" is "Ready"
	I1208 01:41:55.191068 1021451 pod_ready.go:86] duration metric: took 10.047909ms for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.197761 1021451 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.565891 1021451 pod_ready.go:94] pod "kube-controller-manager-embed-certs-172173" is "Ready"
	I1208 01:41:55.565969 1021451 pod_ready.go:86] duration metric: took 368.134101ms for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:55.768793 1021451 pod_ready.go:83] waiting for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:56.165575 1021451 pod_ready.go:94] pod "kube-proxy-9sc27" is "Ready"
	I1208 01:41:56.165606 1021451 pod_ready.go:86] duration metric: took 396.780906ms for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:56.367139 1021451 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:56.766334 1021451 pod_ready.go:94] pod "kube-scheduler-embed-certs-172173" is "Ready"
	I1208 01:41:56.766368 1021451 pod_ready.go:86] duration metric: took 399.20069ms for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:41:56.766381 1021451 pod_ready.go:40] duration metric: took 1.604418003s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:41:56.829404 1021451 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:41:56.832489 1021451 out.go:179] * Done! kubectl is now configured to use "embed-certs-172173" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 01:41:54 embed-certs-172173 crio[845]: time="2025-12-08T01:41:54.801120008Z" level=info msg="Created container 88ec7fddc4272b51aefe00f04ed13ea08616240b47e1a08d85196c556bcb58d2: kube-system/coredns-66bc5c9577-x7llx/coredns" id=456f3ebd-e5f9-4eb3-b43b-63caa24a5e33 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:41:54 embed-certs-172173 crio[845]: time="2025-12-08T01:41:54.801839992Z" level=info msg="Starting container: 88ec7fddc4272b51aefe00f04ed13ea08616240b47e1a08d85196c556bcb58d2" id=4128a4fe-1e82-4044-afae-27052ce5c2a8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:41:54 embed-certs-172173 crio[845]: time="2025-12-08T01:41:54.808993574Z" level=info msg="Started container" PID=1735 containerID=88ec7fddc4272b51aefe00f04ed13ea08616240b47e1a08d85196c556bcb58d2 description=kube-system/coredns-66bc5c9577-x7llx/coredns id=4128a4fe-1e82-4044-afae-27052ce5c2a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a2d4c1e6ac343db60a7eb55a76a76e304eddea76069a626f7ff7c8fb125aca1
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.357362087Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a44cc857-dfb5-476b-9c02-6ea9c0d67fc0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.357433891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.367762549Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6 UID:1a4b14cf-3f65-47a3-9443-2564682f3dae NetNS:/var/run/netns/cc6f77e6-95c9-4638-afd6-36c3b7ac5df5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b728}] Aliases:map[]}"
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.36780959Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.37829825Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6 UID:1a4b14cf-3f65-47a3-9443-2564682f3dae NetNS:/var/run/netns/cc6f77e6-95c9-4638-afd6-36c3b7ac5df5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b728}] Aliases:map[]}"
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.378445599Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.384463743Z" level=info msg="Ran pod sandbox a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6 with infra container: default/busybox/POD" id=a44cc857-dfb5-476b-9c02-6ea9c0d67fc0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.385644351Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1de2c10d-9f55-47e7-a511-5fa53e071d2e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.38576729Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1de2c10d-9f55-47e7-a511-5fa53e071d2e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.385822084Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1de2c10d-9f55-47e7-a511-5fa53e071d2e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.386748257Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1abba94a-f914-4892-9327-7614f358d8e3 name=/runtime.v1.ImageService/PullImage
	Dec 08 01:41:57 embed-certs-172173 crio[845]: time="2025-12-08T01:41:57.389124275Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.457385245Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1abba94a-f914-4892-9327-7614f358d8e3 name=/runtime.v1.ImageService/PullImage
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.458126817Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4c1ed4f1-6972-40a1-8abb-50d4d2d6da02 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.45982547Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5fa3b913-9b12-4243-9e35-bc00edfaa578 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.46522046Z" level=info msg="Creating container: default/busybox/busybox" id=b4253c45-830f-4f87-863c-1bb6e5bbd307 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.465354205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.469925949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.470532347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.484698087Z" level=info msg="Created container 18e08e2c84fcd62ba33c8b83ca1aa1107c1c343af61e6eef744fcf9d7d3a6dd2: default/busybox/busybox" id=b4253c45-830f-4f87-863c-1bb6e5bbd307 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.487070594Z" level=info msg="Starting container: 18e08e2c84fcd62ba33c8b83ca1aa1107c1c343af61e6eef744fcf9d7d3a6dd2" id=9db808c7-ea82-4c90-a2ae-e53a51377e60 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:41:59 embed-certs-172173 crio[845]: time="2025-12-08T01:41:59.489009807Z" level=info msg="Started container" PID=1794 containerID=18e08e2c84fcd62ba33c8b83ca1aa1107c1c343af61e6eef744fcf9d7d3a6dd2 description=default/busybox/busybox id=9db808c7-ea82-4c90-a2ae-e53a51377e60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	18e08e2c84fcd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   a4d290272c1c5       busybox                                      default
	88ec7fddc4272       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   3a2d4c1e6ac34       coredns-66bc5c9577-x7llx                     kube-system
	ef6a0a9a9ef93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   58d5dd9aa74e3       storage-provisioner                          kube-system
	64441669cfb4f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      54 seconds ago       Running             kube-proxy                0                   5cd320d9b889d       kube-proxy-9sc27                             kube-system
	4fa15343cbb48       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   48598c8a8eba7       kindnet-4vjcm                                kube-system
	c258d70e0786e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      About a minute ago   Running             kube-apiserver            0                   28ce7a5039e35       kube-apiserver-embed-certs-172173            kube-system
	11ba03f32d9a4       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      About a minute ago   Running             kube-scheduler            0                   51925f6d90876       kube-scheduler-embed-certs-172173            kube-system
	399852ed232c9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      About a minute ago   Running             kube-controller-manager   0                   e041ce4c1c459       kube-controller-manager-embed-certs-172173   kube-system
	4218827cf3d07       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      About a minute ago   Running             etcd                      0                   2c3c9ae169e28       etcd-embed-certs-172173                      kube-system
	
	
	==> coredns [88ec7fddc4272b51aefe00f04ed13ea08616240b47e1a08d85196c556bcb58d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48522 - 29662 "HINFO IN 2050595836463499384.6566896987887741528. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024949877s
	
	
	==> describe nodes <==
	Name:               embed-certs-172173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-172173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-172173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-172173
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:42:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:42:07 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:42:07 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:42:07 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:42:07 +0000   Mon, 08 Dec 2025 01:41:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-172173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                da830dae-e898-43b3-845a-5a58d5a8ce98
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-x7llx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-172173                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-4vjcm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-172173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-172173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-9sc27                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-172173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-172173 event: Registered Node embed-certs-172173 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-172173 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 8 01:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4218827cf3d075e900694965c3df45cfeb0c0092a933166ff4b357e813b599fd] <==
	{"level":"warn","ts":"2025-12-08T01:41:01.157712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.211534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.277948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.328691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.363437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.396044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.519197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.553553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T01:41:01.578978Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-08T01:41:01.579167Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"warn","ts":"2025-12-08T01:41:01.631871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.632101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.723599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.749086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.817095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.867631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.911498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:01.944521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.015539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.069188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.169395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.193452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.243059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.261898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:41:02.520215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44748","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:42:08 up  6:24,  0 user,  load average: 2.71, 2.87, 2.31
	Linux embed-certs-172173 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4fa15343cbb48697ec63b951dec0cbf515cca317dc1c879833525cac452ee3fa] <==
	I1208 01:41:13.726823       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:41:13.727243       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:41:13.727399       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:41:13.727439       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:41:13.727472       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:41:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:41:13.925759       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:41:13.925838       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:41:13.925873       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:41:13.926900       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:41:43.926692       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:41:43.926704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:41:43.926817       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1208 01:41:43.926950       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1208 01:41:45.527028       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:41:45.527062       1 metrics.go:72] Registering metrics
	I1208 01:41:45.527135       1 controller.go:711] "Syncing nftables rules"
	I1208 01:41:53.932440       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:41:53.932506       1 main.go:301] handling current node
	I1208 01:42:03.927766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:42:03.927929       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c258d70e0786e8b446f27b6bf310e4a684b321bc3fd4bf7f76e537100364629c] <==
	I1208 01:41:04.421649       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 01:41:04.446194       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:41:04.455389       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1208 01:41:04.478388       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1208 01:41:04.497667       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:41:04.502473       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:41:04.688164       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:41:04.804287       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1208 01:41:04.823935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1208 01:41:04.823963       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:41:05.900141       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:41:05.953435       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:41:06.061842       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1208 01:41:06.076228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1208 01:41:06.078946       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:41:06.088454       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 01:41:06.411923       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:41:06.845622       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:41:06.888751       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1208 01:41:06.900692       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1208 01:41:11.419925       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:41:11.425980       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:41:12.266201       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1208 01:41:12.391100       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1208 01:42:07.179878       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:32956: use of closed network connection
	
	
	==> kube-controller-manager [399852ed232c9d145b75e5c88a80b1ac3373fea7b07486959721b2dcceb757b9] <==
	I1208 01:41:11.441480       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:41:11.449637       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1208 01:41:11.452017       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:41:11.452189       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:41:11.452285       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-172173"
	I1208 01:41:11.452341       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1208 01:41:11.457215       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:41:11.457335       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 01:41:11.457437       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:41:11.457469       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:41:11.457507       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:41:11.459111       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:41:11.459204       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1208 01:41:11.459235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 01:41:11.459952       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1208 01:41:11.460232       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1208 01:41:11.460260       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 01:41:11.460305       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1208 01:41:11.460685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:41:11.461765       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:41:11.465720       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1208 01:41:11.469271       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 01:41:11.470429       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 01:41:11.470735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:41:56.459471       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [64441669cfb4f97aa1c47d4ccec8f31fc3759f322e0ea75139de022099650284] <==
	I1208 01:41:14.231802       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:41:14.312276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:41:14.412989       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:41:14.413027       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:41:14.413117       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:41:14.431259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:41:14.431323       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:41:14.435854       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:41:14.436210       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:41:14.436234       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:41:14.437505       1 config.go:200] "Starting service config controller"
	I1208 01:41:14.437528       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:41:14.440940       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:41:14.441021       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:41:14.441065       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:41:14.441093       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:41:14.441406       1 config.go:309] "Starting node config controller"
	I1208 01:41:14.441428       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:41:14.538622       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:41:14.541878       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:41:14.541939       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:41:14.541958       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [11ba03f32d9a4060bfc033170b79a4ac43150000eb3831ffdda22e1bb3b6e733] <==
	I1208 01:41:02.679158       1 serving.go:386] Generated self-signed cert in-memory
	W1208 01:41:05.552510       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 01:41:05.552617       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 01:41:05.552653       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 01:41:05.552684       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 01:41:05.587440       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:41:05.587540       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:41:05.591267       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:41:05.593950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:41:05.594040       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:41:05.592919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1208 01:41:05.604099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 01:41:05.621851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1208 01:41:06.794821       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: E1208 01:41:12.349045    1308 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-9sc27\" is forbidden: User \"system:node:embed-certs-172173\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-172173' and this object" podUID="cc6e0d94-5099-42d5-8c6f-fd2e7d912354" pod="kube-system/kube-proxy-9sc27"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: E1208 01:41:12.349069    1308 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-172173\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-172173' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: E1208 01:41:12.349148    1308 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-172173\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-172173' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404469    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-xtables-lock\") pod \"kube-proxy-9sc27\" (UID: \"cc6e0d94-5099-42d5-8c6f-fd2e7d912354\") " pod="kube-system/kube-proxy-9sc27"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404569    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-lib-modules\") pod \"kube-proxy-9sc27\" (UID: \"cc6e0d94-5099-42d5-8c6f-fd2e7d912354\") " pod="kube-system/kube-proxy-9sc27"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404605    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l6p\" (UniqueName: \"kubernetes.io/projected/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-kube-api-access-l2l6p\") pod \"kube-proxy-9sc27\" (UID: \"cc6e0d94-5099-42d5-8c6f-fd2e7d912354\") " pod="kube-system/kube-proxy-9sc27"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404628    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/31a4531e-5dcf-496e-8724-99c58d72d582-cni-cfg\") pod \"kindnet-4vjcm\" (UID: \"31a4531e-5dcf-496e-8724-99c58d72d582\") " pod="kube-system/kindnet-4vjcm"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404701    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a4531e-5dcf-496e-8724-99c58d72d582-xtables-lock\") pod \"kindnet-4vjcm\" (UID: \"31a4531e-5dcf-496e-8724-99c58d72d582\") " pod="kube-system/kindnet-4vjcm"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404722    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-kube-proxy\") pod \"kube-proxy-9sc27\" (UID: \"cc6e0d94-5099-42d5-8c6f-fd2e7d912354\") " pod="kube-system/kube-proxy-9sc27"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404763    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a4531e-5dcf-496e-8724-99c58d72d582-lib-modules\") pod \"kindnet-4vjcm\" (UID: \"31a4531e-5dcf-496e-8724-99c58d72d582\") " pod="kube-system/kindnet-4vjcm"
	Dec 08 01:41:12 embed-certs-172173 kubelet[1308]: I1208 01:41:12.404785    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5t8t\" (UniqueName: \"kubernetes.io/projected/31a4531e-5dcf-496e-8724-99c58d72d582-kube-api-access-k5t8t\") pod \"kindnet-4vjcm\" (UID: \"31a4531e-5dcf-496e-8724-99c58d72d582\") " pod="kube-system/kindnet-4vjcm"
	Dec 08 01:41:13 embed-certs-172173 kubelet[1308]: E1208 01:41:13.506964    1308 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 08 01:41:13 embed-certs-172173 kubelet[1308]: E1208 01:41:13.507096    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-kube-proxy podName:cc6e0d94-5099-42d5-8c6f-fd2e7d912354 nodeName:}" failed. No retries permitted until 2025-12-08 01:41:14.007060754 +0000 UTC m=+7.264485766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cc6e0d94-5099-42d5-8c6f-fd2e7d912354-kube-proxy") pod "kube-proxy-9sc27" (UID: "cc6e0d94-5099-42d5-8c6f-fd2e7d912354") : failed to sync configmap cache: timed out waiting for the condition
	Dec 08 01:41:13 embed-certs-172173 kubelet[1308]: I1208 01:41:13.545292    1308 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 08 01:41:13 embed-certs-172173 kubelet[1308]: I1208 01:41:13.995897    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4vjcm" podStartSLOduration=1.995863137 podStartE2EDuration="1.995863137s" podCreationTimestamp="2025-12-08 01:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:41:13.979731696 +0000 UTC m=+7.237156708" watchObservedRunningTime="2025-12-08 01:41:13.995863137 +0000 UTC m=+7.253288157"
	Dec 08 01:41:16 embed-certs-172173 kubelet[1308]: I1208 01:41:16.872393    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9sc27" podStartSLOduration=4.8723643150000004 podStartE2EDuration="4.872364315s" podCreationTimestamp="2025-12-08 01:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:41:14.97289754 +0000 UTC m=+8.230322552" watchObservedRunningTime="2025-12-08 01:41:16.872364315 +0000 UTC m=+10.129789335"
	Dec 08 01:41:54 embed-certs-172173 kubelet[1308]: I1208 01:41:54.363924    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 08 01:41:54 embed-certs-172173 kubelet[1308]: I1208 01:41:54.543286    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d30228-444d-42fc-86ac-c92316e96519-config-volume\") pod \"coredns-66bc5c9577-x7llx\" (UID: \"73d30228-444d-42fc-86ac-c92316e96519\") " pod="kube-system/coredns-66bc5c9577-x7llx"
	Dec 08 01:41:54 embed-certs-172173 kubelet[1308]: I1208 01:41:54.543491    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzcw2\" (UniqueName: \"kubernetes.io/projected/73d30228-444d-42fc-86ac-c92316e96519-kube-api-access-jzcw2\") pod \"coredns-66bc5c9577-x7llx\" (UID: \"73d30228-444d-42fc-86ac-c92316e96519\") " pod="kube-system/coredns-66bc5c9577-x7llx"
	Dec 08 01:41:54 embed-certs-172173 kubelet[1308]: I1208 01:41:54.543531    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72ef2628-cf43-451a-a11e-b9657a269b7a-tmp\") pod \"storage-provisioner\" (UID: \"72ef2628-cf43-451a-a11e-b9657a269b7a\") " pod="kube-system/storage-provisioner"
	Dec 08 01:41:54 embed-certs-172173 kubelet[1308]: I1208 01:41:54.543550    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzs5\" (UniqueName: \"kubernetes.io/projected/72ef2628-cf43-451a-a11e-b9657a269b7a-kube-api-access-7rzs5\") pod \"storage-provisioner\" (UID: \"72ef2628-cf43-451a-a11e-b9657a269b7a\") " pod="kube-system/storage-provisioner"
	Dec 08 01:41:55 embed-certs-172173 kubelet[1308]: I1208 01:41:55.098876    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x7llx" podStartSLOduration=43.098827402 podStartE2EDuration="43.098827402s" podCreationTimestamp="2025-12-08 01:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:41:55.080651123 +0000 UTC m=+48.338076143" watchObservedRunningTime="2025-12-08 01:41:55.098827402 +0000 UTC m=+48.356252439"
	Dec 08 01:41:55 embed-certs-172173 kubelet[1308]: I1208 01:41:55.116418    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.116398439 podStartE2EDuration="42.116398439s" podCreationTimestamp="2025-12-08 01:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:41:55.099983403 +0000 UTC m=+48.357408431" watchObservedRunningTime="2025-12-08 01:41:55.116398439 +0000 UTC m=+48.373823459"
	Dec 08 01:41:57 embed-certs-172173 kubelet[1308]: I1208 01:41:57.161948    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqbk\" (UniqueName: \"kubernetes.io/projected/1a4b14cf-3f65-47a3-9443-2564682f3dae-kube-api-access-8jqbk\") pod \"busybox\" (UID: \"1a4b14cf-3f65-47a3-9443-2564682f3dae\") " pod="default/busybox"
	Dec 08 01:41:57 embed-certs-172173 kubelet[1308]: W1208 01:41:57.383272    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/crio-a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6 WatchSource:0}: Error finding container a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6: Status 404 returned error can't find the container with id a4d290272c1c5faf577c3a183a4b223d65925ef4ef91c9c39dbd9993e64c9ca6
	
	
	==> storage-provisioner [ef6a0a9a9ef93e7f59f4204137e0bcb1dc04bd117cee14a751832eea385713a5] <==
	I1208 01:41:54.827833       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:41:54.842911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:41:54.843024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:41:54.846050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:41:54.869750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:41:54.871088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:41:54.871338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_f49a668a-cab6-4c14-98c1-b32d9db40e11!
	I1208 01:41:54.875768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"caea3a6f-ace3-471c-929e-48c4db7a6e04", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-172173_f49a668a-cab6-4c14-98c1-b32d9db40e11 became leader
	W1208 01:41:54.880387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:41:54.885156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:41:54.971985       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_f49a668a-cab6-4c14-98c1-b32d9db40e11!
	W1208 01:41:56.888571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:41:56.896275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:41:58.899731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:41:58.904419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:00.907997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:00.912433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:02.915731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:02.920492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:04.923773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:04.930616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:06.933471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:06.937924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:08.951132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:42:08.963063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
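Note on the storage-provisioner output above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings come from its leader election, which still records the lock on a v1 Endpoints object (the LeaderElection event on kube-system/k8s.io-minikube-hostpath is visible in the same log); the warnings are surfaced API-server warning headers, and the provisioner does acquire the lease and start its controller. A minimal way to inspect that object directly, assuming the embed-certs-172173 context from this run is still available:

	kubectl --context embed-certs-172173 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml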
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-172173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-172173 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-172173 --alsologtostderr -v=1: exit status 80 (1.765906919s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-172173 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:43:24.102503 1030244 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:24.102693 1030244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:24.102725 1030244 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:24.102767 1030244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:24.103092 1030244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:43:24.103396 1030244 out.go:368] Setting JSON to false
	I1208 01:43:24.103457 1030244 mustload.go:66] Loading cluster: embed-certs-172173
	I1208 01:43:24.103906 1030244 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:43:24.104436 1030244 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:43:24.121269 1030244 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:43:24.121605 1030244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:24.189318 1030244 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:43:24.169276385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:24.190509 1030244 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-172173 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1208 01:43:24.194124 1030244 out.go:179] * Pausing node embed-certs-172173 ... 
	I1208 01:43:24.197032 1030244 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:43:24.197403 1030244 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:24.197461 1030244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:43:24.214078 1030244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:43:24.321547 1030244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:43:24.334764 1030244 pause.go:52] kubelet running: true
	I1208 01:43:24.334927 1030244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:43:24.598631 1030244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:43:24.598726 1030244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:43:24.668146 1030244 cri.go:89] found id: "8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8"
	I1208 01:43:24.668168 1030244 cri.go:89] found id: "b682d39444ea6773ce3b8e0d3577008255ec29b6210430973c6747b5762dd436"
	I1208 01:43:24.668173 1030244 cri.go:89] found id: "d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6"
	I1208 01:43:24.668177 1030244 cri.go:89] found id: "6f096903b99a9c1b69027720920a825e2c591255e41e2782cc01568d8f8e3a7d"
	I1208 01:43:24.668181 1030244 cri.go:89] found id: "6641e3ed8e2f10cf6919f6f483e1e3a5ab0add2852e4ecb950a0589b351defff"
	I1208 01:43:24.668184 1030244 cri.go:89] found id: "30a6f430bd90bb0e784d27003f7c76a6d6f8eb4a3ee4c253ed4639b61da6174c"
	I1208 01:43:24.668187 1030244 cri.go:89] found id: "2a7068b0310ccf86fa0fb6f658593fd47dc138cfa94f10ec2b1def34ce5aa74b"
	I1208 01:43:24.668190 1030244 cri.go:89] found id: "930c0199e78964ae17dca15f3099c8b96087b69d6c5ce17e8fcc4f6cd473915c"
	I1208 01:43:24.668193 1030244 cri.go:89] found id: "145d7ece2a98fbd805f8dc4757b5d3ba2b59855339b8d7b43f11dce6d8ce759f"
	I1208 01:43:24.668198 1030244 cri.go:89] found id: "d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	I1208 01:43:24.668201 1030244 cri.go:89] found id: "e21f994700d9e791b62898f6acd0332a72ea0bee77c1b9ef572c4eb21df2040c"
	I1208 01:43:24.668204 1030244 cri.go:89] found id: ""
	I1208 01:43:24.668258 1030244 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:43:24.679264 1030244 retry.go:31] will retry after 189.579292ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:43:24Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:43:24.869626 1030244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:43:24.882232 1030244 pause.go:52] kubelet running: false
	I1208 01:43:24.882300 1030244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:43:25.074246 1030244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:43:25.074326 1030244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:43:25.149583 1030244 cri.go:89] found id: "8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8"
	I1208 01:43:25.149611 1030244 cri.go:89] found id: "b682d39444ea6773ce3b8e0d3577008255ec29b6210430973c6747b5762dd436"
	I1208 01:43:25.149615 1030244 cri.go:89] found id: "d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6"
	I1208 01:43:25.149619 1030244 cri.go:89] found id: "6f096903b99a9c1b69027720920a825e2c591255e41e2782cc01568d8f8e3a7d"
	I1208 01:43:25.149628 1030244 cri.go:89] found id: "6641e3ed8e2f10cf6919f6f483e1e3a5ab0add2852e4ecb950a0589b351defff"
	I1208 01:43:25.149632 1030244 cri.go:89] found id: "30a6f430bd90bb0e784d27003f7c76a6d6f8eb4a3ee4c253ed4639b61da6174c"
	I1208 01:43:25.149636 1030244 cri.go:89] found id: "2a7068b0310ccf86fa0fb6f658593fd47dc138cfa94f10ec2b1def34ce5aa74b"
	I1208 01:43:25.149638 1030244 cri.go:89] found id: "930c0199e78964ae17dca15f3099c8b96087b69d6c5ce17e8fcc4f6cd473915c"
	I1208 01:43:25.149642 1030244 cri.go:89] found id: "145d7ece2a98fbd805f8dc4757b5d3ba2b59855339b8d7b43f11dce6d8ce759f"
	I1208 01:43:25.149648 1030244 cri.go:89] found id: "d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	I1208 01:43:25.149652 1030244 cri.go:89] found id: "e21f994700d9e791b62898f6acd0332a72ea0bee77c1b9ef572c4eb21df2040c"
	I1208 01:43:25.149655 1030244 cri.go:89] found id: ""
	I1208 01:43:25.149712 1030244 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:43:25.160722 1030244 retry.go:31] will retry after 359.958925ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:43:25Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:43:25.521429 1030244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:43:25.535564 1030244 pause.go:52] kubelet running: false
	I1208 01:43:25.535662 1030244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:43:25.708479 1030244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:43:25.708601 1030244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:43:25.783627 1030244 cri.go:89] found id: "8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8"
	I1208 01:43:25.783693 1030244 cri.go:89] found id: "b682d39444ea6773ce3b8e0d3577008255ec29b6210430973c6747b5762dd436"
	I1208 01:43:25.783710 1030244 cri.go:89] found id: "d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6"
	I1208 01:43:25.783727 1030244 cri.go:89] found id: "6f096903b99a9c1b69027720920a825e2c591255e41e2782cc01568d8f8e3a7d"
	I1208 01:43:25.783745 1030244 cri.go:89] found id: "6641e3ed8e2f10cf6919f6f483e1e3a5ab0add2852e4ecb950a0589b351defff"
	I1208 01:43:25.783771 1030244 cri.go:89] found id: "30a6f430bd90bb0e784d27003f7c76a6d6f8eb4a3ee4c253ed4639b61da6174c"
	I1208 01:43:25.783790 1030244 cri.go:89] found id: "2a7068b0310ccf86fa0fb6f658593fd47dc138cfa94f10ec2b1def34ce5aa74b"
	I1208 01:43:25.783804 1030244 cri.go:89] found id: "930c0199e78964ae17dca15f3099c8b96087b69d6c5ce17e8fcc4f6cd473915c"
	I1208 01:43:25.783819 1030244 cri.go:89] found id: "145d7ece2a98fbd805f8dc4757b5d3ba2b59855339b8d7b43f11dce6d8ce759f"
	I1208 01:43:25.783839 1030244 cri.go:89] found id: "d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	I1208 01:43:25.783853 1030244 cri.go:89] found id: "e21f994700d9e791b62898f6acd0332a72ea0bee77c1b9ef572c4eb21df2040c"
	I1208 01:43:25.783876 1030244 cri.go:89] found id: ""
	I1208 01:43:25.783950 1030244 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:43:25.798190 1030244 out.go:203] 
	W1208 01:43:25.801191 1030244 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:43:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:43:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 01:43:25.801273 1030244 out.go:285] * 
	* 
	W1208 01:43:25.808428 1030244 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:43:25.811563 1030244 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-172173 --alsologtostderr -v=1 failed: exit status 80
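The exit status 80 (GUEST_PAUSE) above follows directly from the stderr trace: the pause path disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then shells out to `sudo runc list -f json`, which fails each time with "open /run/runc: no such file or directory" until the retries are exhausted. A minimal diagnostic sketch, assuming the profile is still up and reachable over SSH (the paths and labels are only the ones named in the error and in the crictl calls logged above):

	out/minikube-linux-arm64 ssh -p embed-certs-172173 -- ls -ld /run/runc
	out/minikube-linux-arm64 ssh -p embed-certs-172173 -- sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 ssh -p embed-certs-172173 -- sudo runc list -f json

If /run/runc is missing while crictl still reports the container IDs found above, the failure is in the runc state-directory lookup rather than in the containers themselves.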
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-172173
helpers_test.go:243: (dbg) docker inspect embed-certs-172173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	        "Created": "2025-12-08T01:40:36.846301629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1027676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:42:22.190906961Z",
	            "FinishedAt": "2025-12-08T01:42:21.341902826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hosts",
	        "LogPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c-json.log",
	        "Name": "/embed-certs-172173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-172173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-172173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	                "LowerDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-172173",
	                "Source": "/var/lib/docker/volumes/embed-certs-172173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-172173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-172173",
	                "name.minikube.sigs.k8s.io": "embed-certs-172173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19384b450d5037821f0959638311bb5b50088537940d788ff09a1e9da3262e16",
	            "SandboxKey": "/var/run/docker/netns/19384b450d50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-172173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:8b:7b:c0:3f:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d59524493acc02b4052a1b21ae1c4be3dd0f7ef0214fbeda13b3fc44e2ef94",
	                    "EndpointID": "19737162c8e23cd0b4a98eb0719602a4a702aef3a4da029433f7c21c7856ca42",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-172173",
	                        "5f1be8b9f8b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
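The inspect output above shows every guest port published only on 127.0.0.1 with a dynamically assigned host port (22/tcp on 33792, 8443/tcp on 33795, and so on), which is why the tooling resolves ports through the go-template logged earlier for 22/tcp. The equivalent lookup for the API server port, shown purely as an illustration of that template:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-172173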
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173: exit status 2 (354.07972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25: (1.253955822s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831        │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                   │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:42:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:42:21.875858 1027543 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:42:21.876000 1027543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:42:21.876011 1027543 out.go:374] Setting ErrFile to fd 2...
	I1208 01:42:21.876017 1027543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:42:21.876261 1027543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:42:21.876618 1027543 out.go:368] Setting JSON to false
	I1208 01:42:21.877565 1027543 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23074,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:42:21.877644 1027543 start.go:143] virtualization:  
	I1208 01:42:21.880760 1027543 out.go:179] * [embed-certs-172173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:42:21.884667 1027543 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:42:21.884740 1027543 notify.go:221] Checking for updates...
	I1208 01:42:21.890913 1027543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:42:21.893861 1027543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:21.896848 1027543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:42:21.899833 1027543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:42:21.902708 1027543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:42:21.906024 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:21.906590 1027543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:42:21.943138 1027543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:42:21.943377 1027543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:42:22.017290 1027543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:42:21.997423575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:42:22.017427 1027543 docker.go:319] overlay module found
	I1208 01:42:22.020654 1027543 out.go:179] * Using the docker driver based on existing profile
	I1208 01:42:22.023756 1027543 start.go:309] selected driver: docker
	I1208 01:42:22.023785 1027543 start.go:927] validating driver "docker" against &{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:22.023899 1027543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:42:22.024641 1027543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:42:22.108047 1027543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:42:22.09862316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:42:22.108396 1027543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:42:22.108436 1027543 cni.go:84] Creating CNI manager for ""
	I1208 01:42:22.108502 1027543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:42:22.108539 1027543 start.go:353] cluster config:
	{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:22.111692 1027543 out.go:179] * Starting "embed-certs-172173" primary control-plane node in "embed-certs-172173" cluster
	I1208 01:42:22.114508 1027543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:42:22.117327 1027543 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:42:22.120154 1027543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:42:22.120203 1027543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:42:22.120215 1027543 cache.go:65] Caching tarball of preloaded images
	I1208 01:42:22.120230 1027543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:42:22.120301 1027543 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:42:22.120311 1027543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:42:22.120425 1027543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:42:22.139874 1027543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:42:22.139894 1027543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:42:22.139919 1027543 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:42:22.139950 1027543 start.go:360] acquireMachinesLock for embed-certs-172173: {Name:mk1784cff2b700f98514e7f93e65851ad3664475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:42:22.140021 1027543 start.go:364] duration metric: took 42.954µs to acquireMachinesLock for "embed-certs-172173"
	I1208 01:42:22.140046 1027543 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:42:22.140056 1027543 fix.go:54] fixHost starting: 
	I1208 01:42:22.140334 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:22.156821 1027543 fix.go:112] recreateIfNeeded on embed-certs-172173: state=Stopped err=<nil>
	W1208 01:42:22.156852 1027543 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:42:22.159949 1027543 out.go:252] * Restarting existing docker container for "embed-certs-172173" ...
	I1208 01:42:22.160039 1027543 cli_runner.go:164] Run: docker start embed-certs-172173
	I1208 01:42:22.415051 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:22.437651 1027543 kic.go:430] container "embed-certs-172173" state is running.
	I1208 01:42:22.438368 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:22.472370 1027543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:42:22.472616 1027543 machine.go:94] provisionDockerMachine start ...
	I1208 01:42:22.472688 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:22.496166 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:22.496506 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:22.496515 1027543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:42:22.497137 1027543 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40266->127.0.0.1:33792: read: connection reset by peer
	I1208 01:42:25.654095 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:42:25.654121 1027543 ubuntu.go:182] provisioning hostname "embed-certs-172173"
	I1208 01:42:25.654205 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:25.670941 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:25.671241 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:25.671252 1027543 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-172173 && echo "embed-certs-172173" | sudo tee /etc/hostname
	I1208 01:42:25.831774 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:42:25.831854 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:25.849333 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:25.849673 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:25.849690 1027543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-172173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-172173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-172173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:42:26.007180 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:42:26.007287 1027543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:42:26.007348 1027543 ubuntu.go:190] setting up certificates
	I1208 01:42:26.007388 1027543 provision.go:84] configureAuth start
	I1208 01:42:26.007487 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:26.025540 1027543 provision.go:143] copyHostCerts
	I1208 01:42:26.025618 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:42:26.025632 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:42:26.025713 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:42:26.025814 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:42:26.025819 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:42:26.025844 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:42:26.025904 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:42:26.025909 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:42:26.025933 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:42:26.025984 1027543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.embed-certs-172173 san=[127.0.0.1 192.168.85.2 embed-certs-172173 localhost minikube]
	I1208 01:42:26.290641 1027543 provision.go:177] copyRemoteCerts
	I1208 01:42:26.290722 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:42:26.290803 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.309335 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:26.414818 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:42:26.432436 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:42:26.449011 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 01:42:26.466354 1027543 provision.go:87] duration metric: took 458.929604ms to configureAuth
	I1208 01:42:26.466380 1027543 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:42:26.466575 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:26.466693 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.483663 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:26.483980 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:26.484003 1027543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:42:26.861560 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:42:26.861579 1027543 machine.go:97] duration metric: took 4.388953602s to provisionDockerMachine
	I1208 01:42:26.861591 1027543 start.go:293] postStartSetup for "embed-certs-172173" (driver="docker")
	I1208 01:42:26.861602 1027543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:42:26.861661 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:42:26.861720 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.881905 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:26.987589 1027543 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:42:26.991115 1027543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:42:26.991142 1027543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:42:26.991154 1027543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:42:26.991209 1027543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:42:26.991306 1027543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:42:26.991433 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:42:26.999425 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:42:27.021220 1027543 start.go:296] duration metric: took 159.612498ms for postStartSetup
	I1208 01:42:27.021301 1027543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:42:27.021340 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.038192 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.139937 1027543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:42:27.144814 1027543 fix.go:56] duration metric: took 5.004750438s for fixHost
	I1208 01:42:27.144840 1027543 start.go:83] releasing machines lock for "embed-certs-172173", held for 5.004805856s
	I1208 01:42:27.144911 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:27.161539 1027543 ssh_runner.go:195] Run: cat /version.json
	I1208 01:42:27.161551 1027543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:42:27.161592 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.161607 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.180794 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.192372 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.386811 1027543 ssh_runner.go:195] Run: systemctl --version
	I1208 01:42:27.393287 1027543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:42:27.429548 1027543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:42:27.433921 1027543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:42:27.433995 1027543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:42:27.441769 1027543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:42:27.441793 1027543 start.go:496] detecting cgroup driver to use...
	I1208 01:42:27.441855 1027543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:42:27.441918 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:42:27.457070 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:42:27.470184 1027543 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:42:27.470253 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:42:27.486381 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:42:27.499737 1027543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:42:27.630533 1027543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:42:27.761023 1027543 docker.go:234] disabling docker service ...
	I1208 01:42:27.761167 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:42:27.776138 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:42:27.788834 1027543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:42:27.910751 1027543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:42:28.034998 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:42:28.050309 1027543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:42:28.066175 1027543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:42:28.066270 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.076534 1027543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:42:28.076646 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.085608 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.094638 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.104362 1027543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:42:28.112889 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.122460 1027543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.131311 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.140492 1027543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:42:28.148441 1027543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:42:28.156016 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:28.286893 1027543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:42:28.457270 1027543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:42:28.457370 1027543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:42:28.461140 1027543 start.go:564] Will wait 60s for crictl version
	I1208 01:42:28.461212 1027543 ssh_runner.go:195] Run: which crictl
	I1208 01:42:28.464739 1027543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:42:28.494050 1027543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:42:28.494134 1027543 ssh_runner.go:195] Run: crio --version
	I1208 01:42:28.524110 1027543 ssh_runner.go:195] Run: crio --version
	I1208 01:42:28.555889 1027543 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:42:28.558644 1027543 cli_runner.go:164] Run: docker network inspect embed-certs-172173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:42:28.575486 1027543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:42:28.579400 1027543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:42:28.588889 1027543 kubeadm.go:884] updating cluster {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:42:28.589012 1027543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:42:28.589070 1027543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:42:28.622110 1027543 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:42:28.622134 1027543 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:42:28.622209 1027543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:42:28.649109 1027543 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:42:28.649132 1027543 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:42:28.649141 1027543 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:42:28.649241 1027543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-172173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:42:28.651510 1027543 ssh_runner.go:195] Run: crio config
	I1208 01:42:28.706905 1027543 cni.go:84] Creating CNI manager for ""
	I1208 01:42:28.706931 1027543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:42:28.706971 1027543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:42:28.707005 1027543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-172173 NodeName:embed-certs-172173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:42:28.707144 1027543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-172173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:42:28.707221 1027543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:42:28.714727 1027543 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:42:28.714862 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:42:28.722321 1027543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1208 01:42:28.735016 1027543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:42:28.747941 1027543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1208 01:42:28.765164 1027543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:42:28.769778 1027543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:42:28.780997 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:28.909451 1027543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:42:28.926195 1027543 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173 for IP: 192.168.85.2
	I1208 01:42:28.926268 1027543 certs.go:195] generating shared ca certs ...
	I1208 01:42:28.926297 1027543 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:28.926503 1027543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:42:28.926579 1027543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:42:28.926619 1027543 certs.go:257] generating profile certs ...
	I1208 01:42:28.926755 1027543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.key
	I1208 01:42:28.926874 1027543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7
	I1208 01:42:28.926951 1027543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key
	I1208 01:42:28.927101 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:42:28.927168 1027543 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:42:28.927192 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:42:28.927251 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:42:28.927305 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:42:28.927362 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:42:28.927437 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:42:28.928133 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:42:28.952187 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:42:28.969492 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:42:28.986904 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:42:29.006606 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1208 01:42:29.026890 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:42:29.047535 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:42:29.067191 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:42:29.096337 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:42:29.119453 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:42:29.141443 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:42:29.162976 1027543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:42:29.177403 1027543 ssh_runner.go:195] Run: openssl version
	I1208 01:42:29.183734 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.191304 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:42:29.199055 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.202815 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.202903 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.245162 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:42:29.252937 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.260550 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:42:29.268517 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.272543 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.272611 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.314198 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:42:29.321549 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.328960 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:42:29.336553 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.340403 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.340469 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.381475 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:42:29.388863 1027543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:42:29.392556 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:42:29.433500 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:42:29.474739 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:42:29.516076 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:42:29.557587 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:42:29.615877 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:42:29.660816 1027543 kubeadm.go:401] StartCluster: {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:29.660975 1027543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:42:29.661079 1027543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:42:29.707140 1027543 cri.go:89] found id: ""
	I1208 01:42:29.707265 1027543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:42:29.716393 1027543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:42:29.716461 1027543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:42:29.716558 1027543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:42:29.728070 1027543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:42:29.728530 1027543 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-172173" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:29.728715 1027543 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-172173" cluster setting kubeconfig missing "embed-certs-172173" context setting]
	I1208 01:42:29.729077 1027543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.730443 1027543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:42:29.742997 1027543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:42:29.743035 1027543 kubeadm.go:602] duration metric: took 26.553299ms to restartPrimaryControlPlane
	I1208 01:42:29.743046 1027543 kubeadm.go:403] duration metric: took 82.239609ms to StartCluster
	I1208 01:42:29.743061 1027543 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.743121 1027543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:29.744169 1027543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.744390 1027543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:42:29.744703 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:29.744747 1027543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:42:29.744812 1027543 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-172173"
	I1208 01:42:29.744827 1027543 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-172173"
	W1208 01:42:29.744838 1027543 addons.go:248] addon storage-provisioner should already be in state true
	I1208 01:42:29.744859 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.745286 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.745583 1027543 addons.go:70] Setting dashboard=true in profile "embed-certs-172173"
	I1208 01:42:29.745632 1027543 addons.go:239] Setting addon dashboard=true in "embed-certs-172173"
	W1208 01:42:29.745644 1027543 addons.go:248] addon dashboard should already be in state true
	I1208 01:42:29.745666 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.746069 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.748714 1027543 addons.go:70] Setting default-storageclass=true in profile "embed-certs-172173"
	I1208 01:42:29.748779 1027543 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-172173"
	I1208 01:42:29.749206 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.749364 1027543 out.go:179] * Verifying Kubernetes components...
	I1208 01:42:29.752688 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:29.809333 1027543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:42:29.812672 1027543 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:42:29.812699 1027543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:42:29.812772 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.823426 1027543 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:42:29.824256 1027543 addons.go:239] Setting addon default-storageclass=true in "embed-certs-172173"
	W1208 01:42:29.824278 1027543 addons.go:248] addon default-storageclass should already be in state true
	I1208 01:42:29.824302 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.824746 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.833961 1027543 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:42:29.837100 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:42:29.837126 1027543 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:42:29.837202 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.863001 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:29.894022 1027543 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:42:29.894042 1027543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:42:29.894123 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.894875 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:29.927227 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:30.104362 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:42:30.104399 1027543 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:42:30.158439 1027543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:42:30.177181 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:42:30.177224 1027543 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:42:30.211605 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:42:30.218909 1027543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-172173" to be "Ready" ...
	I1208 01:42:30.230392 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:42:30.241188 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:42:30.241225 1027543 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:42:30.287291 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:42:30.287315 1027543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:42:30.376650 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:42:30.376692 1027543 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:42:30.440251 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:42:30.440277 1027543 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:42:30.505664 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:42:30.505691 1027543 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:42:30.543000 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:42:30.543043 1027543 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:42:30.567295 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:42:30.567334 1027543 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:42:30.589701 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:42:33.828041 1027543 node_ready.go:49] node "embed-certs-172173" is "Ready"
	I1208 01:42:33.828115 1027543 node_ready.go:38] duration metric: took 3.609151382s for node "embed-certs-172173" to be "Ready" ...
	I1208 01:42:33.828144 1027543 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:42:33.828221 1027543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:42:35.601848 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.390195742s)
	I1208 01:42:35.601901 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.371485682s)
	I1208 01:42:35.760052 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.170307253s)
	I1208 01:42:35.760270 1027543 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.932012992s)
	I1208 01:42:35.760364 1027543 api_server.go:72] duration metric: took 6.015943154s to wait for apiserver process to appear ...
	I1208 01:42:35.760423 1027543 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:42:35.760458 1027543 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:42:35.763126 1027543 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-172173 addons enable metrics-server
	
	I1208 01:42:35.766465 1027543 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1208 01:42:35.769441 1027543 addons.go:530] duration metric: took 6.024685077s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1208 01:42:35.769478 1027543 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:42:35.770658 1027543 api_server.go:141] control plane version: v1.34.2
	I1208 01:42:35.770746 1027543 api_server.go:131] duration metric: took 10.299168ms to wait for apiserver health ...
	I1208 01:42:35.770771 1027543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:42:35.777278 1027543 system_pods.go:59] 8 kube-system pods found
	I1208 01:42:35.777322 1027543 system_pods.go:61] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:42:35.777332 1027543 system_pods.go:61] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:42:35.777339 1027543 system_pods.go:61] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:42:35.777346 1027543 system_pods.go:61] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:42:35.777352 1027543 system_pods.go:61] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:42:35.777356 1027543 system_pods.go:61] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:42:35.777362 1027543 system_pods.go:61] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:42:35.777366 1027543 system_pods.go:61] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Running
	I1208 01:42:35.777372 1027543 system_pods.go:74] duration metric: took 6.582917ms to wait for pod list to return data ...
	I1208 01:42:35.777379 1027543 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:42:35.783847 1027543 default_sa.go:45] found service account: "default"
	I1208 01:42:35.783915 1027543 default_sa.go:55] duration metric: took 6.529336ms for default service account to be created ...
	I1208 01:42:35.783940 1027543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:42:35.788069 1027543 system_pods.go:86] 8 kube-system pods found
	I1208 01:42:35.788157 1027543 system_pods.go:89] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:42:35.788182 1027543 system_pods.go:89] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:42:35.788229 1027543 system_pods.go:89] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:42:35.788256 1027543 system_pods.go:89] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:42:35.788282 1027543 system_pods.go:89] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:42:35.788314 1027543 system_pods.go:89] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:42:35.788340 1027543 system_pods.go:89] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:42:35.788361 1027543 system_pods.go:89] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Running
	I1208 01:42:35.788396 1027543 system_pods.go:126] duration metric: took 4.435668ms to wait for k8s-apps to be running ...
	I1208 01:42:35.788420 1027543 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:42:35.788510 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:42:35.805985 1027543 system_svc.go:56] duration metric: took 17.555019ms WaitForService to wait for kubelet
	I1208 01:42:35.806059 1027543 kubeadm.go:587] duration metric: took 6.061636671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:42:35.806091 1027543 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:42:35.816639 1027543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:42:35.816724 1027543 node_conditions.go:123] node cpu capacity is 2
	I1208 01:42:35.816751 1027543 node_conditions.go:105] duration metric: took 10.642687ms to run NodePressure ...
	I1208 01:42:35.816777 1027543 start.go:242] waiting for startup goroutines ...
	I1208 01:42:35.816818 1027543 start.go:247] waiting for cluster config update ...
	I1208 01:42:35.816842 1027543 start.go:256] writing updated cluster config ...
	I1208 01:42:35.817197 1027543 ssh_runner.go:195] Run: rm -f paused
	I1208 01:42:35.821305 1027543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:42:35.827083 1027543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 01:42:37.870797 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:40.333162 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:42.338946 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:44.834039 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:46.835576 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:49.332637 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:51.833329 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:54.332448 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:56.332857 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:58.832919 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:00.833400 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:03.332948 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:05.344161 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:07.833155 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:10.333937 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	I1208 01:43:10.832192 1027543 pod_ready.go:94] pod "coredns-66bc5c9577-x7llx" is "Ready"
	I1208 01:43:10.832224 1027543 pod_ready.go:86] duration metric: took 35.005064052s for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.835051 1027543 pod_ready.go:83] waiting for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.839841 1027543 pod_ready.go:94] pod "etcd-embed-certs-172173" is "Ready"
	I1208 01:43:10.839864 1027543 pod_ready.go:86] duration metric: took 4.783257ms for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.842048 1027543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.847048 1027543 pod_ready.go:94] pod "kube-apiserver-embed-certs-172173" is "Ready"
	I1208 01:43:10.847079 1027543 pod_ready.go:86] duration metric: took 4.956117ms for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.849658 1027543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.031262 1027543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-172173" is "Ready"
	I1208 01:43:11.031291 1027543 pod_ready.go:86] duration metric: took 181.609242ms for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.230764 1027543 pod_ready.go:83] waiting for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.630611 1027543 pod_ready.go:94] pod "kube-proxy-9sc27" is "Ready"
	I1208 01:43:11.630684 1027543 pod_ready.go:86] duration metric: took 399.891989ms for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.830716 1027543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:12.231316 1027543 pod_ready.go:94] pod "kube-scheduler-embed-certs-172173" is "Ready"
	I1208 01:43:12.231348 1027543 pod_ready.go:86] duration metric: took 400.563783ms for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:12.231366 1027543 pod_ready.go:40] duration metric: took 36.409980475s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:43:12.295811 1027543 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:43:12.298798 1027543 out.go:179] * Done! kubectl is now configured to use "embed-certs-172173" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.320009219Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=394b1dab-32d1-4352-abbf-f2f8f75f8b8e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.321315186Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bb09c5ba-1e8c-4e0b-8646-e9e6836f9966 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.321564174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.333881415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.334072293Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b987fed4063f950a290f89834efd11834e414efdc2d239c3a9d66f74df93b714/merged/etc/passwd: no such file or directory"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.334095513Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b987fed4063f950a290f89834efd11834e414efdc2d239c3a9d66f74df93b714/merged/etc/group: no such file or directory"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.33439246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.35131301Z" level=info msg="Created container 8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8: kube-system/storage-provisioner/storage-provisioner" id=bb09c5ba-1e8c-4e0b-8646-e9e6836f9966 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.352045038Z" level=info msg="Starting container: 8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8" id=0fcbe443-7464-47c1-a997-e6767d455ca0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.355926329Z" level=info msg="Started container" PID=1652 containerID=8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8 description=kube-system/storage-provisioner/storage-provisioner id=0fcbe443-7464-47c1-a997-e6767d455ca0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=727331193ba3bd5d5525b3878f094e7ba6d36ca41ba79fed78ee4c104f4e6869
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.023982579Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.029330396Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.02937116Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.029398171Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033534078Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033595305Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033704533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037758445Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037808373Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037836378Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041666024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041853415Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041897773Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.045585904Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.045626241Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c7177987e083       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   727331193ba3b       storage-provisioner                          kube-system
	d7d4e2bf67ee0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   db507576aad3f       dashboard-metrics-scraper-6ffb444bf9-zb6wr   kubernetes-dashboard
	e21f994700d9e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   076c8598a8942       kubernetes-dashboard-855c9754f9-2jsh6        kubernetes-dashboard
	b682d39444ea6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   04b72b3ef85a3       coredns-66bc5c9577-x7llx                     kube-system
	68b628b6ac6bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   8bb9b3b870a1a       busybox                                      default
	d19a6ad4b22d7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   727331193ba3b       storage-provisioner                          kube-system
	6f096903b99a9       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           52 seconds ago      Running             kube-proxy                  1                   717320e7d01f4       kube-proxy-9sc27                             kube-system
	6641e3ed8e2f1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   5fb6322c197c9       kindnet-4vjcm                                kube-system
	30a6f430bd90b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           57 seconds ago      Running             etcd                        1                   c5205db681042       etcd-embed-certs-172173                      kube-system
	2a7068b0310cc       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           57 seconds ago      Running             kube-scheduler              1                   a35e32b8e9a38       kube-scheduler-embed-certs-172173            kube-system
	930c0199e7896       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           57 seconds ago      Running             kube-controller-manager     1                   0bb7dff42c6dc       kube-controller-manager-embed-certs-172173   kube-system
	145d7ece2a98f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           57 seconds ago      Running             kube-apiserver              1                   15db2451d658a       kube-apiserver-embed-certs-172173            kube-system
	
	
	==> coredns [b682d39444ea6773ce3b8e0d3577008255ec29b6210430973c6747b5762dd436] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48376 - 11352 "HINFO IN 5378690169054043430.7190237059136864101. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02247825s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-172173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-172173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-172173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-172173
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:43:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:41:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-172173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                da830dae-e898-43b3-845a-5a58d5a8ce98
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-x7llx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-embed-certs-172173                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-4vjcm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-embed-certs-172173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-embed-certs-172173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-9sc27                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-embed-certs-172173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zb6wr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2jsh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m19s                  kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s                  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m19s                  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m15s                  node-controller  Node embed-certs-172173 event: Registered Node embed-certs-172173 in Controller
	  Normal   NodeReady                92s                    kubelet          Node embed-certs-172173 status is now: NodeReady
	  Normal   Starting                 57s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-172173 event: Registered Node embed-certs-172173 in Controller
	
	
	==> dmesg <==
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [30a6f430bd90bb0e784d27003f7c76a6d6f8eb4a3ee4c253ed4639b61da6174c] <==
	{"level":"warn","ts":"2025-12-08T01:42:32.272953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.298916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.327730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.347451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.363842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.375216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.400291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.416330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.457501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.468635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.488395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.511398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.538454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.557244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.584048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.601932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.618103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.634611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.650928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.671188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.692220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.715634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.731404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.751604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.843005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:43:27 up  6:25,  0 user,  load average: 1.96, 2.63, 2.27
	Linux embed-certs-172173 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6641e3ed8e2f10cf6919f6f483e1e3a5ab0add2852e4ecb950a0589b351defff] <==
	I1208 01:42:34.820584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:42:34.820812       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:42:34.820947       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:42:34.820959       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:42:34.820968       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:42:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:42:35.022012       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:42:35.022034       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:42:35.022043       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:42:35.022358       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:43:05.021802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:43:05.021967       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:43:05.022779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:43:05.022785       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:43:06.324454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:43:06.324483       1 metrics.go:72] Registering metrics
	I1208 01:43:06.324578       1 controller.go:711] "Syncing nftables rules"
	I1208 01:43:15.023550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:43:15.023672       1 main.go:301] handling current node
	I1208 01:43:25.027746       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:43:25.027777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [145d7ece2a98fbd805f8dc4757b5d3ba2b59855339b8d7b43f11dce6d8ce759f] <==
	I1208 01:42:34.051789       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:42:33.709457       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1208 01:42:34.052158       1 shared_informer.go:349] "Waiting for caches to sync" controller="crd-autoregister"
	I1208 01:42:34.052165       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 01:42:33.709549       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1208 01:42:34.052254       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1208 01:42:34.052269       1 aggregator.go:171] initial CRD sync complete...
	I1208 01:42:34.052277       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 01:42:34.052284       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:42:34.052290       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:42:34.094688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:42:34.122888       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:42:34.159415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:42:34.159483       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:42:34.249043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:42:34.586520       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:42:35.172418       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 01:42:35.354616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:42:35.528413       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:42:35.646540       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:42:35.739882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.237.16"}
	I1208 01:42:35.753490       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.164.200"}
	I1208 01:42:37.389956       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1208 01:42:37.645937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:42:37.802552       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [930c0199e78964ae17dca15f3099c8b96087b69d6c5ce17e8fcc4f6cd473915c] <==
	I1208 01:42:37.201230       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:42:37.201605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:42:37.204604       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:42:37.206798       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1208 01:42:37.211056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:42:37.213200       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 01:42:37.213295       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1208 01:42:37.216553       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1208 01:42:37.217779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:42:37.219938       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:42:37.220097       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 01:42:37.221194       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:42:37.223420       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:42:37.224564       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:42:37.231238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:42:37.231625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 01:42:37.234026       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1208 01:42:37.234270       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1208 01:42:37.234300       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1208 01:42:37.234416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:42:37.234443       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:42:37.237689       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:42:37.238815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1208 01:42:37.245119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:42:37.247338       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [6f096903b99a9c1b69027720920a825e2c591255e41e2782cc01568d8f8e3a7d] <==
	I1208 01:42:35.179050       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:42:35.376437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:42:35.576362       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:42:35.576397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:42:35.576470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:42:35.657734       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:42:35.657792       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:42:35.662804       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:42:35.663227       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:42:35.663454       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:42:35.664715       1 config.go:200] "Starting service config controller"
	I1208 01:42:35.664797       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:42:35.664840       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:42:35.664868       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:42:35.664919       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:42:35.664947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:42:35.665640       1 config.go:309] "Starting node config controller"
	I1208 01:42:35.665704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:42:35.665736       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:42:35.766936       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:42:35.766973       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:42:35.767013       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2a7068b0310ccf86fa0fb6f658593fd47dc138cfa94f10ec2b1def34ce5aa74b] <==
	I1208 01:42:33.155827       1 serving.go:386] Generated self-signed cert in-memory
	I1208 01:42:35.520067       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:42:35.520108       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:42:35.553207       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1208 01:42:35.553264       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1208 01:42:35.553297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:42:35.553303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:42:35.553314       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.553321       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.553642       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:42:35.553725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:42:35.653664       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.653785       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1208 01:42:35.653805       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: I1208 01:42:37.676274     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6edf4e48-32d5-4897-93e2-da7c7ebc4886-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zb6wr\" (UID: \"6edf4e48-32d5-4897-93e2-da7c7ebc4886\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr"
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: W1208 01:42:37.902828     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/crio-db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c WatchSource:0}: Error finding container db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c: Status 404 returned error can't find the container with id db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: W1208 01:42:37.920742     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/crio-076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42 WatchSource:0}: Error finding container 076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42: Status 404 returned error can't find the container with id 076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42
	Dec 08 01:42:40 embed-certs-172173 kubelet[783]: I1208 01:42:40.578652     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:42:42 embed-certs-172173 kubelet[783]: I1208 01:42:42.230554     783 scope.go:117] "RemoveContainer" containerID="cdddcec9dff34c30858dc41367be7b703af5842af23c978f1eecf5b713d90ec7"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: I1208 01:42:43.234507     783 scope.go:117] "RemoveContainer" containerID="cdddcec9dff34c30858dc41367be7b703af5842af23c978f1eecf5b713d90ec7"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: I1208 01:42:43.234829     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: E1208 01:42:43.235001     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:44 embed-certs-172173 kubelet[783]: I1208 01:42:44.240149     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:44 embed-certs-172173 kubelet[783]: E1208 01:42:44.240925     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:46 embed-certs-172173 kubelet[783]: I1208 01:42:46.724386     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:46 embed-certs-172173 kubelet[783]: E1208 01:42:46.725337     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:47 embed-certs-172173 kubelet[783]: I1208 01:42:47.371973     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2jsh6" podStartSLOduration=1.805123984 podStartE2EDuration="10.37195324s" podCreationTimestamp="2025-12-08 01:42:37 +0000 UTC" firstStartedPulling="2025-12-08 01:42:37.927523875 +0000 UTC m=+8.995181326" lastFinishedPulling="2025-12-08 01:42:46.494353123 +0000 UTC m=+17.562010582" observedRunningTime="2025-12-08 01:42:47.27959668 +0000 UTC m=+18.347254131" watchObservedRunningTime="2025-12-08 01:42:47.37195324 +0000 UTC m=+18.439610699"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.142000     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.306550     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.306819     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: E1208 01:43:02.307069     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:05 embed-certs-172173 kubelet[783]: I1208 01:43:05.317188     783 scope.go:117] "RemoveContainer" containerID="d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6"
	Dec 08 01:43:06 embed-certs-172173 kubelet[783]: I1208 01:43:06.725136     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:06 embed-certs-172173 kubelet[783]: E1208 01:43:06.725311     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:19 embed-certs-172173 kubelet[783]: I1208 01:43:19.141391     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:19 embed-certs-172173 kubelet[783]: E1208 01:43:19.142963     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e21f994700d9e791b62898f6acd0332a72ea0bee77c1b9ef572c4eb21df2040c] <==
	2025/12/08 01:42:46 Using namespace: kubernetes-dashboard
	2025/12/08 01:42:46 Using in-cluster config to connect to apiserver
	2025/12/08 01:42:46 Using secret token for csrf signing
	2025/12/08 01:42:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:42:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:42:46 Successful initial request to the apiserver, version: v1.34.2
	2025/12/08 01:42:46 Generating JWE encryption key
	2025/12/08 01:42:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:42:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:42:46 Initializing JWE encryption key from synchronized object
	2025/12/08 01:42:46 Creating in-cluster Sidecar client
	2025/12/08 01:42:47 Serving insecurely on HTTP port: 9090
	2025/12/08 01:42:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:43:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:42:46 Starting overwatch
	
	
	==> storage-provisioner [8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8] <==
	I1208 01:43:05.371214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:43:05.385178       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:43:05.385693       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:43:05.388279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:08.843115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:13.103514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:16.702112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:19.756672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.779661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.790524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:43:22.790697       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:43:22.790893       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d!
	I1208 01:43:22.791500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"caea3a6f-ace3-471c-929e-48c4db7a6e04", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d became leader
	W1208 01:43:22.801775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.812358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:43:22.891856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d!
	W1208 01:43:24.815083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:24.821343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:26.827229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:26.832881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6] <==
	I1208 01:42:34.917121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:43:04.949249       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173: exit status 2 (349.454215ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-172173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-172173
helpers_test.go:243: (dbg) docker inspect embed-certs-172173:

-- stdout --
	[
	    {
	        "Id": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	        "Created": "2025-12-08T01:40:36.846301629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1027676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:42:22.190906961Z",
	            "FinishedAt": "2025-12-08T01:42:21.341902826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/hosts",
	        "LogPath": "/var/lib/docker/containers/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c-json.log",
	        "Name": "/embed-certs-172173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-172173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-172173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c",
	                "LowerDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cbf00a16701183451963def1528c987014673378a0f6454ac444e183c2fe9eb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-172173",
	                "Source": "/var/lib/docker/volumes/embed-certs-172173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-172173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-172173",
	                "name.minikube.sigs.k8s.io": "embed-certs-172173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19384b450d5037821f0959638311bb5b50088537940d788ff09a1e9da3262e16",
	            "SandboxKey": "/var/run/docker/netns/19384b450d50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-172173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:8b:7b:c0:3f:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d59524493acc02b4052a1b21ae1c4be3dd0f7ef0214fbeda13b3fc44e2ef94",
	                    "EndpointID": "19737162c8e23cd0b4a98eb0719602a4a702aef3a4da029433f7c21c7856ca42",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-172173",
	                        "5f1be8b9f8b5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173: exit status 2 (387.46998ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-172173 logs -n 25: (1.30043021s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:36 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p force-systemd-env-520011                                                                                                                                                                                                                   │ force-systemd-env-520011 │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ cert-options-489608 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ ssh     │ -p cert-options-489608 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608      │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561   │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831        │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                   │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-172173       │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:42:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:42:21.875858 1027543 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:42:21.876000 1027543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:42:21.876011 1027543 out.go:374] Setting ErrFile to fd 2...
	I1208 01:42:21.876017 1027543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:42:21.876261 1027543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:42:21.876618 1027543 out.go:368] Setting JSON to false
	I1208 01:42:21.877565 1027543 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23074,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:42:21.877644 1027543 start.go:143] virtualization:  
	I1208 01:42:21.880760 1027543 out.go:179] * [embed-certs-172173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:42:21.884667 1027543 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:42:21.884740 1027543 notify.go:221] Checking for updates...
	I1208 01:42:21.890913 1027543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:42:21.893861 1027543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:21.896848 1027543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:42:21.899833 1027543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:42:21.902708 1027543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:42:21.906024 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:21.906590 1027543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:42:21.943138 1027543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:42:21.943377 1027543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:42:22.017290 1027543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:42:21.997423575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:42:22.017427 1027543 docker.go:319] overlay module found
	I1208 01:42:22.020654 1027543 out.go:179] * Using the docker driver based on existing profile
	I1208 01:42:22.023756 1027543 start.go:309] selected driver: docker
	I1208 01:42:22.023785 1027543 start.go:927] validating driver "docker" against &{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:22.023899 1027543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:42:22.024641 1027543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:42:22.108047 1027543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:42:22.09862316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:42:22.108396 1027543 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:42:22.108436 1027543 cni.go:84] Creating CNI manager for ""
	I1208 01:42:22.108502 1027543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:42:22.108539 1027543 start.go:353] cluster config:
	{Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:22.111692 1027543 out.go:179] * Starting "embed-certs-172173" primary control-plane node in "embed-certs-172173" cluster
	I1208 01:42:22.114508 1027543 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:42:22.117327 1027543 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:42:22.120154 1027543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:42:22.120203 1027543 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:42:22.120215 1027543 cache.go:65] Caching tarball of preloaded images
	I1208 01:42:22.120230 1027543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:42:22.120301 1027543 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:42:22.120311 1027543 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:42:22.120425 1027543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:42:22.139874 1027543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:42:22.139894 1027543 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:42:22.139919 1027543 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:42:22.139950 1027543 start.go:360] acquireMachinesLock for embed-certs-172173: {Name:mk1784cff2b700f98514e7f93e65851ad3664475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:42:22.140021 1027543 start.go:364] duration metric: took 42.954µs to acquireMachinesLock for "embed-certs-172173"
	I1208 01:42:22.140046 1027543 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:42:22.140056 1027543 fix.go:54] fixHost starting: 
	I1208 01:42:22.140334 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:22.156821 1027543 fix.go:112] recreateIfNeeded on embed-certs-172173: state=Stopped err=<nil>
	W1208 01:42:22.156852 1027543 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:42:22.159949 1027543 out.go:252] * Restarting existing docker container for "embed-certs-172173" ...
	I1208 01:42:22.160039 1027543 cli_runner.go:164] Run: docker start embed-certs-172173
	I1208 01:42:22.415051 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:22.437651 1027543 kic.go:430] container "embed-certs-172173" state is running.
	I1208 01:42:22.438368 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:22.472370 1027543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/config.json ...
	I1208 01:42:22.472616 1027543 machine.go:94] provisionDockerMachine start ...
	I1208 01:42:22.472688 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:22.496166 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:22.496506 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:22.496515 1027543 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:42:22.497137 1027543 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40266->127.0.0.1:33792: read: connection reset by peer
	I1208 01:42:25.654095 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:42:25.654121 1027543 ubuntu.go:182] provisioning hostname "embed-certs-172173"
	I1208 01:42:25.654205 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:25.670941 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:25.671241 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:25.671252 1027543 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-172173 && echo "embed-certs-172173" | sudo tee /etc/hostname
	I1208 01:42:25.831774 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-172173
	
	I1208 01:42:25.831854 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:25.849333 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:25.849673 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:25.849690 1027543 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-172173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-172173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-172173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:42:26.007180 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:42:26.007287 1027543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:42:26.007348 1027543 ubuntu.go:190] setting up certificates
	I1208 01:42:26.007388 1027543 provision.go:84] configureAuth start
	I1208 01:42:26.007487 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:26.025540 1027543 provision.go:143] copyHostCerts
	I1208 01:42:26.025618 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:42:26.025632 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:42:26.025713 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:42:26.025814 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:42:26.025819 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:42:26.025844 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:42:26.025904 1027543 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:42:26.025909 1027543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:42:26.025933 1027543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:42:26.025984 1027543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.embed-certs-172173 san=[127.0.0.1 192.168.85.2 embed-certs-172173 localhost minikube]
	I1208 01:42:26.290641 1027543 provision.go:177] copyRemoteCerts
	I1208 01:42:26.290722 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:42:26.290803 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.309335 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:26.414818 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:42:26.432436 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:42:26.449011 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 01:42:26.466354 1027543 provision.go:87] duration metric: took 458.929604ms to configureAuth
	I1208 01:42:26.466380 1027543 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:42:26.466575 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:26.466693 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.483663 1027543 main.go:143] libmachine: Using SSH client type: native
	I1208 01:42:26.483980 1027543 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1208 01:42:26.484003 1027543 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:42:26.861560 1027543 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:42:26.861579 1027543 machine.go:97] duration metric: took 4.388953602s to provisionDockerMachine
	I1208 01:42:26.861591 1027543 start.go:293] postStartSetup for "embed-certs-172173" (driver="docker")
	I1208 01:42:26.861602 1027543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:42:26.861661 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:42:26.861720 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:26.881905 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:26.987589 1027543 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:42:26.991115 1027543 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:42:26.991142 1027543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:42:26.991154 1027543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:42:26.991209 1027543 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:42:26.991306 1027543 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:42:26.991433 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:42:26.999425 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:42:27.021220 1027543 start.go:296] duration metric: took 159.612498ms for postStartSetup
	I1208 01:42:27.021301 1027543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:42:27.021340 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.038192 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.139937 1027543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:42:27.144814 1027543 fix.go:56] duration metric: took 5.004750438s for fixHost
	I1208 01:42:27.144840 1027543 start.go:83] releasing machines lock for "embed-certs-172173", held for 5.004805856s
	I1208 01:42:27.144911 1027543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-172173
	I1208 01:42:27.161539 1027543 ssh_runner.go:195] Run: cat /version.json
	I1208 01:42:27.161551 1027543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:42:27.161592 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.161607 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:27.180794 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.192372 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:27.386811 1027543 ssh_runner.go:195] Run: systemctl --version
	I1208 01:42:27.393287 1027543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:42:27.429548 1027543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:42:27.433921 1027543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:42:27.433995 1027543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:42:27.441769 1027543 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:42:27.441793 1027543 start.go:496] detecting cgroup driver to use...
	I1208 01:42:27.441855 1027543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:42:27.441918 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:42:27.457070 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:42:27.470184 1027543 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:42:27.470253 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:42:27.486381 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:42:27.499737 1027543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:42:27.630533 1027543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:42:27.761023 1027543 docker.go:234] disabling docker service ...
	I1208 01:42:27.761167 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:42:27.776138 1027543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:42:27.788834 1027543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:42:27.910751 1027543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:42:28.034998 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:42:28.050309 1027543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:42:28.066175 1027543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:42:28.066270 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.076534 1027543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:42:28.076646 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.085608 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.094638 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.104362 1027543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:42:28.112889 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.122460 1027543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.131311 1027543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:42:28.140492 1027543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:42:28.148441 1027543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:42:28.156016 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:28.286893 1027543 ssh_runner.go:195] Run: sudo systemctl restart crio
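The run above pins the pause image, switches CRI-O to the "cgroupfs" cgroup manager, moves conmon into the pod cgroup, opens unprivileged ports via default_sysctls, writes /etc/crictl.yaml, and restarts the service. A minimal verification sketch on the node, assuming only the paths shown in the commands above (the exact layout of 02-crio.conf may differ):

	# expected after the sed edits: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml              # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo systemctl is-active crio     # expected: active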
	I1208 01:42:28.457270 1027543 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:42:28.457370 1027543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:42:28.461140 1027543 start.go:564] Will wait 60s for crictl version
	I1208 01:42:28.461212 1027543 ssh_runner.go:195] Run: which crictl
	I1208 01:42:28.464739 1027543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:42:28.494050 1027543 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:42:28.494134 1027543 ssh_runner.go:195] Run: crio --version
	I1208 01:42:28.524110 1027543 ssh_runner.go:195] Run: crio --version
	I1208 01:42:28.555889 1027543 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:42:28.558644 1027543 cli_runner.go:164] Run: docker network inspect embed-certs-172173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:42:28.575486 1027543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:42:28.579400 1027543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:42:28.588889 1027543 kubeadm.go:884] updating cluster {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:42:28.589012 1027543 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:42:28.589070 1027543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:42:28.622110 1027543 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:42:28.622134 1027543 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:42:28.622209 1027543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:42:28.649109 1027543 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:42:28.649132 1027543 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:42:28.649141 1027543 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 01:42:28.649241 1027543 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-172173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
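The fragment above is the kubelet systemd drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below. A hedged way to confirm the flags actually reached systemd on the node:

	sudo systemctl cat kubelet        # merged unit, including the 10-kubeadm.conf drop-in
	# the ExecStart above should carry --hostname-override=embed-certs-172173 and --node-ip=192.168.85.2
	sudo grep -E 'hostname-override|node-ip' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf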
	I1208 01:42:28.651510 1027543 ssh_runner.go:195] Run: crio config
	I1208 01:42:28.706905 1027543 cni.go:84] Creating CNI manager for ""
	I1208 01:42:28.706931 1027543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:42:28.706971 1027543 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:42:28.707005 1027543 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-172173 NodeName:embed-certs-172173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:42:28.707144 1027543 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-172173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:42:28.707221 1027543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:42:28.714727 1027543 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:42:28.714862 1027543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:42:28.722321 1027543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1208 01:42:28.735016 1027543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:42:28.747941 1027543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
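The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2215-byte scp just above) and later diffed against the existing kubeadm.yaml; this restart path never reruns kubeadm init. Purely as an illustration of how such a file is consumed on a fresh control plane (not something this run executes):

	# illustrative sketch only -- minikube drives kubeadm itself
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml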
	I1208 01:42:28.765164 1027543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:42:28.769778 1027543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:42:28.780997 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:28.909451 1027543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:42:28.926195 1027543 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173 for IP: 192.168.85.2
	I1208 01:42:28.926268 1027543 certs.go:195] generating shared ca certs ...
	I1208 01:42:28.926297 1027543 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:28.926503 1027543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:42:28.926579 1027543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:42:28.926619 1027543 certs.go:257] generating profile certs ...
	I1208 01:42:28.926755 1027543 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/client.key
	I1208 01:42:28.926874 1027543 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key.d90ebbe7
	I1208 01:42:28.926951 1027543 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key
	I1208 01:42:28.927101 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:42:28.927168 1027543 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:42:28.927192 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:42:28.927251 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:42:28.927305 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:42:28.927362 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:42:28.927437 1027543 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:42:28.928133 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:42:28.952187 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:42:28.969492 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:42:28.986904 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:42:29.006606 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1208 01:42:29.026890 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:42:29.047535 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:42:29.067191 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/embed-certs-172173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:42:29.096337 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:42:29.119453 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:42:29.141443 1027543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:42:29.162976 1027543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:42:29.177403 1027543 ssh_runner.go:195] Run: openssl version
	I1208 01:42:29.183734 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.191304 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:42:29.199055 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.202815 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.202903 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:42:29.245162 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:42:29.252937 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.260550 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:42:29.268517 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.272543 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.272611 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:42:29.314198 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:42:29.321549 1027543 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.328960 1027543 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:42:29.336553 1027543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.340403 1027543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.340469 1027543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:42:29.381475 1027543 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
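Each CA above is linked into /etc/ssl/certs and then probed by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of the hash-link convention those test -L checks rely on:

	# compute the subject hash and expose the CA under <hash>.0, which is how
	# OpenSSL-based clients locate trusted certificates in /etc/ssl/certs
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	test -L "/etc/ssl/certs/${h}.0" && echo trusted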
	I1208 01:42:29.388863 1027543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:42:29.392556 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:42:29.433500 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:42:29.474739 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:42:29.516076 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:42:29.557587 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:42:29.615877 1027543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
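The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate remains valid for at least another 86400 seconds (24 hours); the flag exits non-zero if the certificate would expire inside that window. The same check with an explicit result, as a sketch:

	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least another 24h" || echo "expires within 24h"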
	I1208 01:42:29.660816 1027543 kubeadm.go:401] StartCluster: {Name:embed-certs-172173 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-172173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:42:29.660975 1027543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:42:29.661079 1027543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:42:29.707140 1027543 cri.go:89] found id: ""
	I1208 01:42:29.707265 1027543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:42:29.716393 1027543 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:42:29.716461 1027543 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:42:29.716558 1027543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:42:29.728070 1027543 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:42:29.728530 1027543 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-172173" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:29.728715 1027543 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-172173" cluster setting kubeconfig missing "embed-certs-172173" context setting]
	I1208 01:42:29.729077 1027543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.730443 1027543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:42:29.742997 1027543 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:42:29.743035 1027543 kubeadm.go:602] duration metric: took 26.553299ms to restartPrimaryControlPlane
	I1208 01:42:29.743046 1027543 kubeadm.go:403] duration metric: took 82.239609ms to StartCluster
	I1208 01:42:29.743061 1027543 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.743121 1027543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:42:29.744169 1027543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:42:29.744390 1027543 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:42:29.744703 1027543 config.go:182] Loaded profile config "embed-certs-172173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:42:29.744747 1027543 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:42:29.744812 1027543 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-172173"
	I1208 01:42:29.744827 1027543 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-172173"
	W1208 01:42:29.744838 1027543 addons.go:248] addon storage-provisioner should already be in state true
	I1208 01:42:29.744859 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.745286 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.745583 1027543 addons.go:70] Setting dashboard=true in profile "embed-certs-172173"
	I1208 01:42:29.745632 1027543 addons.go:239] Setting addon dashboard=true in "embed-certs-172173"
	W1208 01:42:29.745644 1027543 addons.go:248] addon dashboard should already be in state true
	I1208 01:42:29.745666 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.746069 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.748714 1027543 addons.go:70] Setting default-storageclass=true in profile "embed-certs-172173"
	I1208 01:42:29.748779 1027543 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-172173"
	I1208 01:42:29.749206 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.749364 1027543 out.go:179] * Verifying Kubernetes components...
	I1208 01:42:29.752688 1027543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:42:29.809333 1027543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:42:29.812672 1027543 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:42:29.812699 1027543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:42:29.812772 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.823426 1027543 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:42:29.824256 1027543 addons.go:239] Setting addon default-storageclass=true in "embed-certs-172173"
	W1208 01:42:29.824278 1027543 addons.go:248] addon default-storageclass should already be in state true
	I1208 01:42:29.824302 1027543 host.go:66] Checking if "embed-certs-172173" exists ...
	I1208 01:42:29.824746 1027543 cli_runner.go:164] Run: docker container inspect embed-certs-172173 --format={{.State.Status}}
	I1208 01:42:29.833961 1027543 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:42:29.837100 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:42:29.837126 1027543 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:42:29.837202 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.863001 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:29.894022 1027543 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:42:29.894042 1027543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:42:29.894123 1027543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-172173
	I1208 01:42:29.894875 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:29.927227 1027543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/embed-certs-172173/id_rsa Username:docker}
	I1208 01:42:30.104362 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:42:30.104399 1027543 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:42:30.158439 1027543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:42:30.177181 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:42:30.177224 1027543 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:42:30.211605 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:42:30.218909 1027543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-172173" to be "Ready" ...
	I1208 01:42:30.230392 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:42:30.241188 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:42:30.241225 1027543 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:42:30.287291 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:42:30.287315 1027543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:42:30.376650 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:42:30.376692 1027543 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:42:30.440251 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:42:30.440277 1027543 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:42:30.505664 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:42:30.505691 1027543 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:42:30.543000 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:42:30.543043 1027543 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:42:30.567295 1027543 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:42:30.567334 1027543 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:42:30.589701 1027543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:42:33.828041 1027543 node_ready.go:49] node "embed-certs-172173" is "Ready"
	I1208 01:42:33.828115 1027543 node_ready.go:38] duration metric: took 3.609151382s for node "embed-certs-172173" to be "Ready" ...
	I1208 01:42:33.828144 1027543 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:42:33.828221 1027543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:42:35.601848 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.390195742s)
	I1208 01:42:35.601901 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.371485682s)
	I1208 01:42:35.760052 1027543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.170307253s)
	I1208 01:42:35.760270 1027543 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.932012992s)
	I1208 01:42:35.760364 1027543 api_server.go:72] duration metric: took 6.015943154s to wait for apiserver process to appear ...
	I1208 01:42:35.760423 1027543 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:42:35.760458 1027543 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:42:35.763126 1027543 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-172173 addons enable metrics-server
	
	I1208 01:42:35.766465 1027543 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1208 01:42:35.769441 1027543 addons.go:530] duration metric: took 6.024685077s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
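With storage-provisioner, default-storageclass, and dashboard enabled, a hedged spot check from the host (assuming the kubeconfig context carries the profile name, as set up earlier in this run):

	kubectl --context embed-certs-172173 -n kubernetes-dashboard get pods   # dashboard + metrics-scraper
	kubectl --context embed-certs-172173 get storageclass                   # default-storageclass addon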
	I1208 01:42:35.769478 1027543 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:42:35.770658 1027543 api_server.go:141] control plane version: v1.34.2
	I1208 01:42:35.770746 1027543 api_server.go:131] duration metric: took 10.299168ms to wait for apiserver health ...
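The healthz wait above is an HTTPS GET against the apiserver; equivalent checks from the host, as a sketch (-k skips verification of the cluster CA, or go through kubectl instead):

	curl -k https://192.168.85.2:8443/healthz ; echo                 # expected: ok
	kubectl --context embed-certs-172173 get --raw /healthz          # expected: ok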
	I1208 01:42:35.770771 1027543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:42:35.777278 1027543 system_pods.go:59] 8 kube-system pods found
	I1208 01:42:35.777322 1027543 system_pods.go:61] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:42:35.777332 1027543 system_pods.go:61] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:42:35.777339 1027543 system_pods.go:61] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:42:35.777346 1027543 system_pods.go:61] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:42:35.777352 1027543 system_pods.go:61] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:42:35.777356 1027543 system_pods.go:61] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:42:35.777362 1027543 system_pods.go:61] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:42:35.777366 1027543 system_pods.go:61] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Running
	I1208 01:42:35.777372 1027543 system_pods.go:74] duration metric: took 6.582917ms to wait for pod list to return data ...
	I1208 01:42:35.777379 1027543 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:42:35.783847 1027543 default_sa.go:45] found service account: "default"
	I1208 01:42:35.783915 1027543 default_sa.go:55] duration metric: took 6.529336ms for default service account to be created ...
	I1208 01:42:35.783940 1027543 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:42:35.788069 1027543 system_pods.go:86] 8 kube-system pods found
	I1208 01:42:35.788157 1027543 system_pods.go:89] "coredns-66bc5c9577-x7llx" [73d30228-444d-42fc-86ac-c92316e96519] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:42:35.788182 1027543 system_pods.go:89] "etcd-embed-certs-172173" [12390949-5f9e-40df-9b36-465ad43beff9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:42:35.788229 1027543 system_pods.go:89] "kindnet-4vjcm" [31a4531e-5dcf-496e-8724-99c58d72d582] Running
	I1208 01:42:35.788256 1027543 system_pods.go:89] "kube-apiserver-embed-certs-172173" [dbcce6a2-3478-46db-99b9-8f442b35a479] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:42:35.788282 1027543 system_pods.go:89] "kube-controller-manager-embed-certs-172173" [cb996c0e-5ca9-428b-8733-5132b397f836] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:42:35.788314 1027543 system_pods.go:89] "kube-proxy-9sc27" [cc6e0d94-5099-42d5-8c6f-fd2e7d912354] Running
	I1208 01:42:35.788340 1027543 system_pods.go:89] "kube-scheduler-embed-certs-172173" [57fdd763-2b53-4dc7-a3e3-9072de50ecce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:42:35.788361 1027543 system_pods.go:89] "storage-provisioner" [72ef2628-cf43-451a-a11e-b9657a269b7a] Running
	I1208 01:42:35.788396 1027543 system_pods.go:126] duration metric: took 4.435668ms to wait for k8s-apps to be running ...
	I1208 01:42:35.788420 1027543 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:42:35.788510 1027543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:42:35.805985 1027543 system_svc.go:56] duration metric: took 17.555019ms WaitForService to wait for kubelet
	I1208 01:42:35.806059 1027543 kubeadm.go:587] duration metric: took 6.061636671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:42:35.806091 1027543 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:42:35.816639 1027543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:42:35.816724 1027543 node_conditions.go:123] node cpu capacity is 2
	I1208 01:42:35.816751 1027543 node_conditions.go:105] duration metric: took 10.642687ms to run NodePressure ...
	I1208 01:42:35.816777 1027543 start.go:242] waiting for startup goroutines ...
	I1208 01:42:35.816818 1027543 start.go:247] waiting for cluster config update ...
	I1208 01:42:35.816842 1027543 start.go:256] writing updated cluster config ...
	I1208 01:42:35.817197 1027543 ssh_runner.go:195] Run: rm -f paused
	I1208 01:42:35.821305 1027543 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:42:35.827083 1027543 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 01:42:37.870797 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:40.333162 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:42.338946 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:44.834039 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:46.835576 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:49.332637 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:51.833329 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:54.332448 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:56.332857 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:42:58.832919 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:00.833400 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:03.332948 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:05.344161 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:07.833155 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	W1208 01:43:10.333937 1027543 pod_ready.go:104] pod "coredns-66bc5c9577-x7llx" is not "Ready", error: <nil>
	I1208 01:43:10.832192 1027543 pod_ready.go:94] pod "coredns-66bc5c9577-x7llx" is "Ready"
	I1208 01:43:10.832224 1027543 pod_ready.go:86] duration metric: took 35.005064052s for pod "coredns-66bc5c9577-x7llx" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.835051 1027543 pod_ready.go:83] waiting for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.839841 1027543 pod_ready.go:94] pod "etcd-embed-certs-172173" is "Ready"
	I1208 01:43:10.839864 1027543 pod_ready.go:86] duration metric: took 4.783257ms for pod "etcd-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.842048 1027543 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.847048 1027543 pod_ready.go:94] pod "kube-apiserver-embed-certs-172173" is "Ready"
	I1208 01:43:10.847079 1027543 pod_ready.go:86] duration metric: took 4.956117ms for pod "kube-apiserver-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:10.849658 1027543 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.031262 1027543 pod_ready.go:94] pod "kube-controller-manager-embed-certs-172173" is "Ready"
	I1208 01:43:11.031291 1027543 pod_ready.go:86] duration metric: took 181.609242ms for pod "kube-controller-manager-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.230764 1027543 pod_ready.go:83] waiting for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.630611 1027543 pod_ready.go:94] pod "kube-proxy-9sc27" is "Ready"
	I1208 01:43:11.630684 1027543 pod_ready.go:86] duration metric: took 399.891989ms for pod "kube-proxy-9sc27" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:11.830716 1027543 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:12.231316 1027543 pod_ready.go:94] pod "kube-scheduler-embed-certs-172173" is "Ready"
	I1208 01:43:12.231348 1027543 pod_ready.go:86] duration metric: took 400.563783ms for pod "kube-scheduler-embed-certs-172173" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:43:12.231366 1027543 pod_ready.go:40] duration metric: took 36.409980475s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:43:12.295811 1027543 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:43:12.298798 1027543 out.go:179] * Done! kubectl is now configured to use "embed-certs-172173" cluster and "default" namespace by default
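The final message means the kubeconfig's current context now points at this cluster; a quick confirmation from the host would look roughly like:

	kubectl config current-context    # expected: embed-certs-172173
	kubectl get nodes -o wide         # expected: embed-certs-172173   Ready   control-plane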
	
	
	==> CRI-O <==
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.320009219Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=394b1dab-32d1-4352-abbf-f2f8f75f8b8e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.321315186Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bb09c5ba-1e8c-4e0b-8646-e9e6836f9966 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.321564174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.333881415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.334072293Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b987fed4063f950a290f89834efd11834e414efdc2d239c3a9d66f74df93b714/merged/etc/passwd: no such file or directory"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.334095513Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b987fed4063f950a290f89834efd11834e414efdc2d239c3a9d66f74df93b714/merged/etc/group: no such file or directory"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.33439246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.35131301Z" level=info msg="Created container 8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8: kube-system/storage-provisioner/storage-provisioner" id=bb09c5ba-1e8c-4e0b-8646-e9e6836f9966 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.352045038Z" level=info msg="Starting container: 8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8" id=0fcbe443-7464-47c1-a997-e6767d455ca0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:43:05 embed-certs-172173 crio[656]: time="2025-12-08T01:43:05.355926329Z" level=info msg="Started container" PID=1652 containerID=8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8 description=kube-system/storage-provisioner/storage-provisioner id=0fcbe443-7464-47c1-a997-e6767d455ca0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=727331193ba3bd5d5525b3878f094e7ba6d36ca41ba79fed78ee4c104f4e6869
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.023982579Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.029330396Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.02937116Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.029398171Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033534078Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033595305Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.033704533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037758445Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037808373Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.037836378Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041666024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041853415Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.041897773Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.045585904Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:43:15 embed-certs-172173 crio[656]: time="2025-12-08T01:43:15.045626241Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c7177987e083       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   727331193ba3b       storage-provisioner                          kube-system
	d7d4e2bf67ee0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   db507576aad3f       dashboard-metrics-scraper-6ffb444bf9-zb6wr   kubernetes-dashboard
	e21f994700d9e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   076c8598a8942       kubernetes-dashboard-855c9754f9-2jsh6        kubernetes-dashboard
	b682d39444ea6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago      Running             coredns                     1                   04b72b3ef85a3       coredns-66bc5c9577-x7llx                     kube-system
	68b628b6ac6bf       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   8bb9b3b870a1a       busybox                                      default
	d19a6ad4b22d7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago      Exited              storage-provisioner         1                   727331193ba3b       storage-provisioner                          kube-system
	6f096903b99a9       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           54 seconds ago      Running             kube-proxy                  1                   717320e7d01f4       kube-proxy-9sc27                             kube-system
	6641e3ed8e2f1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago      Running             kindnet-cni                 1                   5fb6322c197c9       kindnet-4vjcm                                kube-system
	30a6f430bd90b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           59 seconds ago      Running             etcd                        1                   c5205db681042       etcd-embed-certs-172173                      kube-system
	2a7068b0310cc       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           59 seconds ago      Running             kube-scheduler              1                   a35e32b8e9a38       kube-scheduler-embed-certs-172173            kube-system
	930c0199e7896       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           59 seconds ago      Running             kube-controller-manager     1                   0bb7dff42c6dc       kube-controller-manager-embed-certs-172173   kube-system
	145d7ece2a98f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           59 seconds ago      Running             kube-apiserver              1                   15db2451d658a       kube-apiserver-embed-certs-172173            kube-system
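The table above is the CRI-level view of the node; a comparable listing (including the Exited storage-provisioner and dashboard-metrics-scraper attempts) can be reproduced on the node with the same crictl binary invoked earlier:

	sudo crictl ps -a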
	
	
	==> coredns [b682d39444ea6773ce3b8e0d3577008255ec29b6210430973c6747b5762dd436] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48376 - 11352 "HINFO IN 5378690169054043430.7190237059136864101. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02247825s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-172173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-172173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-172173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-172173
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:43:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:40:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:43:05 +0000   Mon, 08 Dec 2025 01:41:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-172173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                da830dae-e898-43b3-845a-5a58d5a8ce98
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-x7llx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-172173                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-4vjcm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-172173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-172173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-9sc27                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-172173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zb6wr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2jsh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-172173 event: Registered Node embed-certs-172173 in Controller
	  Normal   NodeReady                95s                    kubelet          Node embed-certs-172173 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-172173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-172173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-172173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-172173 event: Registered Node embed-certs-172173 in Controller
	
	
	==> dmesg <==
	[Dec 8 01:06] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [30a6f430bd90bb0e784d27003f7c76a6d6f8eb4a3ee4c253ed4639b61da6174c] <==
	{"level":"warn","ts":"2025-12-08T01:42:32.272953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.298916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.327730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.347451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.363842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.375216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.400291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.416330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.457501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.468635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.488395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.511398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.538454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.557244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.584048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.601932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.618103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.634611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.650928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.671188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.692220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.715634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.731404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.751604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:42:32.843005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48042","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:43:29 up  6:25,  0 user,  load average: 1.96, 2.63, 2.27
	Linux embed-certs-172173 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6641e3ed8e2f10cf6919f6f483e1e3a5ab0add2852e4ecb950a0589b351defff] <==
	I1208 01:42:34.820584       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:42:34.820812       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:42:34.820947       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:42:34.820959       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:42:34.820968       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:42:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:42:35.022012       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:42:35.022034       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:42:35.022043       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:42:35.022358       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:43:05.021802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:43:05.021967       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:43:05.022779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:43:05.022785       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:43:06.324454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:43:06.324483       1 metrics.go:72] Registering metrics
	I1208 01:43:06.324578       1 controller.go:711] "Syncing nftables rules"
	I1208 01:43:15.023550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:43:15.023672       1 main.go:301] handling current node
	I1208 01:43:25.027746       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:43:25.027777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [145d7ece2a98fbd805f8dc4757b5d3ba2b59855339b8d7b43f11dce6d8ce759f] <==
	I1208 01:42:34.051789       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:42:33.709457       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1208 01:42:34.052158       1 shared_informer.go:349] "Waiting for caches to sync" controller="crd-autoregister"
	I1208 01:42:34.052165       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 01:42:33.709549       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1208 01:42:34.052254       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1208 01:42:34.052269       1 aggregator.go:171] initial CRD sync complete...
	I1208 01:42:34.052277       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 01:42:34.052284       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:42:34.052290       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:42:34.094688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:42:34.122888       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:42:34.159415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:42:34.159483       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:42:34.249043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:42:34.586520       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:42:35.172418       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 01:42:35.354616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:42:35.528413       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:42:35.646540       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:42:35.739882       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.237.16"}
	I1208 01:42:35.753490       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.164.200"}
	I1208 01:42:37.389956       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1208 01:42:37.645937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:42:37.802552       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [930c0199e78964ae17dca15f3099c8b96087b69d6c5ce17e8fcc4f6cd473915c] <==
	I1208 01:42:37.201230       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:42:37.201605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:42:37.204604       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:42:37.206798       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1208 01:42:37.211056       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:42:37.213200       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 01:42:37.213295       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1208 01:42:37.216553       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1208 01:42:37.217779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:42:37.219938       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:42:37.220097       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 01:42:37.221194       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:42:37.223420       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1208 01:42:37.224564       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:42:37.231238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:42:37.231625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 01:42:37.234026       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1208 01:42:37.234270       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1208 01:42:37.234300       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1208 01:42:37.234416       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:42:37.234443       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:42:37.237689       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:42:37.238815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1208 01:42:37.245119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:42:37.247338       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [6f096903b99a9c1b69027720920a825e2c591255e41e2782cc01568d8f8e3a7d] <==
	I1208 01:42:35.179050       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:42:35.376437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:42:35.576362       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:42:35.576397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:42:35.576470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:42:35.657734       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:42:35.657792       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:42:35.662804       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:42:35.663227       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:42:35.663454       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:42:35.664715       1 config.go:200] "Starting service config controller"
	I1208 01:42:35.664797       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:42:35.664840       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:42:35.664868       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:42:35.664919       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:42:35.664947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:42:35.665640       1 config.go:309] "Starting node config controller"
	I1208 01:42:35.665704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:42:35.665736       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:42:35.766936       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:42:35.766973       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:42:35.767013       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2a7068b0310ccf86fa0fb6f658593fd47dc138cfa94f10ec2b1def34ce5aa74b] <==
	I1208 01:42:33.155827       1 serving.go:386] Generated self-signed cert in-memory
	I1208 01:42:35.520067       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:42:35.520108       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:42:35.553207       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1208 01:42:35.553264       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1208 01:42:35.553297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:42:35.553303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:42:35.553314       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.553321       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.553642       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:42:35.553725       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:42:35.653664       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:42:35.653785       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1208 01:42:35.653805       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: I1208 01:42:37.676274     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6edf4e48-32d5-4897-93e2-da7c7ebc4886-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zb6wr\" (UID: \"6edf4e48-32d5-4897-93e2-da7c7ebc4886\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr"
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: W1208 01:42:37.902828     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/crio-db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c WatchSource:0}: Error finding container db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c: Status 404 returned error can't find the container with id db507576aad3fa1c12f46017e1cc794f4c1f70a2e6635c5ce90e5a7ec632a58c
	Dec 08 01:42:37 embed-certs-172173 kubelet[783]: W1208 01:42:37.920742     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f1be8b9f8b5a848c15c872e8382028f45c157ca5e6a0e60a890a6eb8e3ddf5c/crio-076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42 WatchSource:0}: Error finding container 076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42: Status 404 returned error can't find the container with id 076c8598a894226a985008455be2b8c77dac547f532e41eeced1fd98eb969a42
	Dec 08 01:42:40 embed-certs-172173 kubelet[783]: I1208 01:42:40.578652     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:42:42 embed-certs-172173 kubelet[783]: I1208 01:42:42.230554     783 scope.go:117] "RemoveContainer" containerID="cdddcec9dff34c30858dc41367be7b703af5842af23c978f1eecf5b713d90ec7"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: I1208 01:42:43.234507     783 scope.go:117] "RemoveContainer" containerID="cdddcec9dff34c30858dc41367be7b703af5842af23c978f1eecf5b713d90ec7"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: I1208 01:42:43.234829     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:43 embed-certs-172173 kubelet[783]: E1208 01:42:43.235001     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:44 embed-certs-172173 kubelet[783]: I1208 01:42:44.240149     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:44 embed-certs-172173 kubelet[783]: E1208 01:42:44.240925     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:46 embed-certs-172173 kubelet[783]: I1208 01:42:46.724386     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:42:46 embed-certs-172173 kubelet[783]: E1208 01:42:46.725337     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:42:47 embed-certs-172173 kubelet[783]: I1208 01:42:47.371973     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2jsh6" podStartSLOduration=1.805123984 podStartE2EDuration="10.37195324s" podCreationTimestamp="2025-12-08 01:42:37 +0000 UTC" firstStartedPulling="2025-12-08 01:42:37.927523875 +0000 UTC m=+8.995181326" lastFinishedPulling="2025-12-08 01:42:46.494353123 +0000 UTC m=+17.562010582" observedRunningTime="2025-12-08 01:42:47.27959668 +0000 UTC m=+18.347254131" watchObservedRunningTime="2025-12-08 01:42:47.37195324 +0000 UTC m=+18.439610699"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.142000     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.306550     783 scope.go:117] "RemoveContainer" containerID="e255f86277520ac4d1c310398bccce7d1d67ac0e8e644936f517d499fcbcddae"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: I1208 01:43:02.306819     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:02 embed-certs-172173 kubelet[783]: E1208 01:43:02.307069     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:05 embed-certs-172173 kubelet[783]: I1208 01:43:05.317188     783 scope.go:117] "RemoveContainer" containerID="d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6"
	Dec 08 01:43:06 embed-certs-172173 kubelet[783]: I1208 01:43:06.725136     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:06 embed-certs-172173 kubelet[783]: E1208 01:43:06.725311     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:19 embed-certs-172173 kubelet[783]: I1208 01:43:19.141391     783 scope.go:117] "RemoveContainer" containerID="d7d4e2bf67ee0b89097b73d26835f88068f8b36eb93d88bc301c3fd1a8a2a652"
	Dec 08 01:43:19 embed-certs-172173 kubelet[783]: E1208 01:43:19.142963     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zb6wr_kubernetes-dashboard(6edf4e48-32d5-4897-93e2-da7c7ebc4886)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zb6wr" podUID="6edf4e48-32d5-4897-93e2-da7c7ebc4886"
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:43:24 embed-certs-172173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e21f994700d9e791b62898f6acd0332a72ea0bee77c1b9ef572c4eb21df2040c] <==
	2025/12/08 01:42:46 Using namespace: kubernetes-dashboard
	2025/12/08 01:42:46 Using in-cluster config to connect to apiserver
	2025/12/08 01:42:46 Using secret token for csrf signing
	2025/12/08 01:42:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:42:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:42:46 Successful initial request to the apiserver, version: v1.34.2
	2025/12/08 01:42:46 Generating JWE encryption key
	2025/12/08 01:42:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:42:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:42:46 Initializing JWE encryption key from synchronized object
	2025/12/08 01:42:46 Creating in-cluster Sidecar client
	2025/12/08 01:42:47 Serving insecurely on HTTP port: 9090
	2025/12/08 01:42:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:43:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:42:46 Starting overwatch
	
	
	==> storage-provisioner [8c7177987e08375b1e3abd52b78b9a7540bad0080eb9af00b68492f25e9437f8] <==
	I1208 01:43:05.371214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:43:05.385178       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:43:05.385693       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:43:05.388279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:08.843115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:13.103514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:16.702112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:19.756672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.779661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.790524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:43:22.790697       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:43:22.790893       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d!
	I1208 01:43:22.791500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"caea3a6f-ace3-471c-929e-48c4db7a6e04", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d became leader
	W1208 01:43:22.801775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:22.812358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:43:22.891856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-172173_17432047-30d0-42e6-9bb2-bc9a1edcae3d!
	W1208 01:43:24.815083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:24.821343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:26.827229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:26.832881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:28.836536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:43:28.843676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d19a6ad4b22d756fb5a5022c280de9ac94b867e7b8aa69c1abd42018f609fae6] <==
	I1208 01:42:34.917121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:43:04.949249       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173: exit status 2 (383.977099ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-172173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.08s)
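A note on re-checking the state behind a Pause failure like the one above: the two probes the harness itself runs can be repeated by hand against the profile named in the logs (profile name taken from the output above; a minimal sketch, not part of the test run):

    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-172173 -n embed-certs-172173
    kubectl --context embed-certs-172173 get po -A --field-selector=status.phase!=Running

The first reports the API server state minikube sees; the second lists any pods not in the Running phase.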

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.357077ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:45:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993283 describe deploy/metrics-server -n kube-system: exit status 1 (90.482433ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-993283 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
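The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which shells into the node and runs the runc command quoted in the stderr block. Assuming the profile name from this run, that check can be repeated manually with something like the following (a sketch; invoking the command through `minikube ssh` is an assumption and is not part of the captured output):

    out/minikube-linux-arm64 ssh -p default-k8s-diff-port-993283 -- sudo runc list -f json

The "open /run/runc: no such file or directory" message in the stderr above suggests the runc state directory was absent on the node when the check ran, so the paused-state listing failed before the addon could be enabled.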
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993283
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993283:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	        "Created": "2025-12-08T01:43:38.262395986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1032114,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:43:38.327646292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hosts",
	        "LogPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505-json.log",
	        "Name": "/default-k8s-diff-port-993283",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993283:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-993283",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	                "LowerDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993283",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993283/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993283",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24e9c024208a2c94ad6a5207d93096551c49b6990c700c5cbc52e92bb8d2d0cf",
	            "SandboxKey": "/var/run/docker/netns/24e9c024208a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993283": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:38:13:43:61:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b08c231373a28dfebdc786db1bd7305a935d3afbb9f365148f132a530c3640",
	                    "EndpointID": "d697e01fc2dfc8dd66123800b2f5154871f1f0b2738e77258eda9a2e7a3c0358",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993283",
	                        "9cfbb32a7825"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25: (1.261369879s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-489608                                                                                                                                                                                                                        │ cert-options-489608          │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:37 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:37 UTC │ 08 Dec 25 01:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-661561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │                     │
	│ stop    │ -p old-k8s-version-661561 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                   │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                               │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:43:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:43:33.213832 1031688 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:33.213959 1031688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:33.213978 1031688 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:33.213984 1031688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:33.214241 1031688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:43:33.214674 1031688 out.go:368] Setting JSON to false
	I1208 01:43:33.215615 1031688 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23146,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:43:33.215687 1031688 start.go:143] virtualization:  
	I1208 01:43:33.219321 1031688 out.go:179] * [default-k8s-diff-port-993283] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:33.222433 1031688 notify.go:221] Checking for updates...
	I1208 01:43:33.223002 1031688 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:33.226015 1031688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:33.229524 1031688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:43:33.232582 1031688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:43:33.235464 1031688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:33.238236 1031688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:33.241625 1031688 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:33.241786 1031688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:33.284347 1031688 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:33.284464 1031688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:33.360409 1031688 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:33.351370232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:33.360511 1031688 docker.go:319] overlay module found
	I1208 01:43:33.363610 1031688 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:33.366504 1031688 start.go:309] selected driver: docker
	I1208 01:43:33.366519 1031688 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:33.366531 1031688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:33.367286 1031688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:33.427791 1031688 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:33.418713378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:33.427956 1031688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:43:33.428167 1031688 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:43:33.431314 1031688 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:33.434194 1031688 cni.go:84] Creating CNI manager for ""
	I1208 01:43:33.434281 1031688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:43:33.434295 1031688 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:33.434377 1031688 start.go:353] cluster config:
	{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:33.437515 1031688 out.go:179] * Starting "default-k8s-diff-port-993283" primary control-plane node in "default-k8s-diff-port-993283" cluster
	I1208 01:43:33.440255 1031688 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:43:33.443165 1031688 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:33.445989 1031688 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:43:33.446041 1031688 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:43:33.446052 1031688 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:33.446136 1031688 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:33.446150 1031688 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:43:33.446262 1031688 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:43:33.446285 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json: {Name:mk190fc81006e086d6fdf0d6461638711f4d17e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:33.446443 1031688 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:33.465052 1031688 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:33.465074 1031688 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:33.465089 1031688 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:33.465121 1031688 start.go:360] acquireMachinesLock for default-k8s-diff-port-993283: {Name:mk8568f2bc3d9295af85055d5f2cebcc44a030bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:33.465222 1031688 start.go:364] duration metric: took 80.887µs to acquireMachinesLock for "default-k8s-diff-port-993283"
	I1208 01:43:33.465252 1031688 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:43:33.465321 1031688 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:33.468689 1031688 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:33.468943 1031688 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993283" (driver="docker")
	I1208 01:43:33.468980 1031688 client.go:173] LocalClient.Create starting
	I1208 01:43:33.469096 1031688 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:43:33.469134 1031688 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:33.469153 1031688 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:33.469205 1031688 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:43:33.469232 1031688 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:33.469244 1031688 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:33.469601 1031688 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993283 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:33.486042 1031688 cli_runner.go:211] docker network inspect default-k8s-diff-port-993283 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:33.486120 1031688 network_create.go:284] running [docker network inspect default-k8s-diff-port-993283] to gather additional debugging logs...
	I1208 01:43:33.486154 1031688 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993283
	W1208 01:43:33.501173 1031688 cli_runner.go:211] docker network inspect default-k8s-diff-port-993283 returned with exit code 1
	I1208 01:43:33.501212 1031688 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993283]: docker network inspect default-k8s-diff-port-993283: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993283 not found
	I1208 01:43:33.501228 1031688 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993283]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993283 not found
	
	** /stderr **
	I1208 01:43:33.501333 1031688 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:33.517240 1031688 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:43:33.517584 1031688 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:43:33.517922 1031688 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:43:33.518183 1031688 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:43:33.518601 1031688 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0fd40}
	I1208 01:43:33.518628 1031688 network_create.go:124] attempt to create docker network default-k8s-diff-port-993283 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:43:33.518687 1031688 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993283 default-k8s-diff-port-993283
	I1208 01:43:33.575384 1031688 network_create.go:108] docker network default-k8s-diff-port-993283 192.168.85.0/24 created
	I1208 01:43:33.575422 1031688 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-993283" container
	I1208 01:43:33.575513 1031688 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:33.591278 1031688 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993283 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993283 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:33.608862 1031688 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993283
	I1208 01:43:33.608962 1031688 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993283-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993283 --entrypoint /usr/bin/test -v default-k8s-diff-port-993283:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:34.107403 1031688 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993283
	I1208 01:43:34.107470 1031688 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:43:34.107485 1031688 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:34.107568 1031688 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993283:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:38.179633 1031688 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993283:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.072001039s)
	I1208 01:43:38.179666 1031688 kic.go:203] duration metric: took 4.072177599s to extract preloaded images to volume ...
	W1208 01:43:38.179806 1031688 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:38.179961 1031688 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:38.247048 1031688 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993283 --name default-k8s-diff-port-993283 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993283 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993283 --network default-k8s-diff-port-993283 --ip 192.168.85.2 --volume default-k8s-diff-port-993283:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:38.550437 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Running}}
	I1208 01:43:38.575671 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:43:38.601862 1031688 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993283 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:38.657442 1031688 oci.go:144] the created container "default-k8s-diff-port-993283" has a running status.
	I1208 01:43:38.657478 1031688 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa...
	I1208 01:43:38.861634 1031688 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:38.902134 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:43:38.936727 1031688 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:38.936746 1031688 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993283 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:43:39.010965 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:43:39.040546 1031688 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:39.040635 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:39.076585 1031688 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:39.076927 1031688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1208 01:43:39.076942 1031688 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:39.077597 1031688 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47802->127.0.0.1:33797: read: connection reset by peer
	I1208 01:43:42.243589 1031688 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:43:42.243618 1031688 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993283"
	I1208 01:43:42.243693 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:42.276149 1031688 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:42.276566 1031688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1208 01:43:42.276583 1031688 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993283 && echo "default-k8s-diff-port-993283" | sudo tee /etc/hostname
	I1208 01:43:42.440602 1031688 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:43:42.440699 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:42.458154 1031688 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:42.458487 1031688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1208 01:43:42.458512 1031688 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993283/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:43:42.615323 1031688 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:42.615349 1031688 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:43:42.615373 1031688 ubuntu.go:190] setting up certificates
	I1208 01:43:42.615390 1031688 provision.go:84] configureAuth start
	I1208 01:43:42.615448 1031688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:43:42.633004 1031688 provision.go:143] copyHostCerts
	I1208 01:43:42.633086 1031688 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:43:42.633101 1031688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:43:42.633179 1031688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:43:42.633275 1031688 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:43:42.633283 1031688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:43:42.633312 1031688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:43:42.633368 1031688 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:43:42.633376 1031688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:43:42.633400 1031688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:43:42.633451 1031688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993283 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-993283 localhost minikube]
	I1208 01:43:42.973184 1031688 provision.go:177] copyRemoteCerts
	I1208 01:43:42.973254 1031688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:42.973295 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:42.989684 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:43:43.103024 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:43.121679 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1208 01:43:43.140124 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:43:43.157574 1031688 provision.go:87] duration metric: took 542.170053ms to configureAuth
	I1208 01:43:43.157619 1031688 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:43.157808 1031688 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:43:43.157920 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:43.175057 1031688 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:43.175369 1031688 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1208 01:43:43.175387 1031688 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:43:43.494207 1031688 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:43:43.494230 1031688 machine.go:97] duration metric: took 4.453664498s to provisionDockerMachine
	I1208 01:43:43.494241 1031688 client.go:176] duration metric: took 10.025251018s to LocalClient.Create
	I1208 01:43:43.494258 1031688 start.go:167] duration metric: took 10.025317858s to libmachine.API.Create "default-k8s-diff-port-993283"
	I1208 01:43:43.494269 1031688 start.go:293] postStartSetup for "default-k8s-diff-port-993283" (driver="docker")
	I1208 01:43:43.494282 1031688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:43.494365 1031688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:43.494413 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:43.511959 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:43:43.619333 1031688 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:43.622782 1031688 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:43.622814 1031688 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:43.622825 1031688 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:43:43.622913 1031688 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:43:43.623003 1031688 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:43:43.623116 1031688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:43.630663 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:43:43.649517 1031688 start.go:296] duration metric: took 155.230476ms for postStartSetup
	I1208 01:43:43.649899 1031688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:43:43.666439 1031688 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:43:43.666736 1031688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:43.666803 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:43.683347 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:43:43.788351 1031688 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:43:43.793726 1031688 start.go:128] duration metric: took 10.328391077s to createHost
	I1208 01:43:43.793753 1031688 start.go:83] releasing machines lock for "default-k8s-diff-port-993283", held for 10.328517488s
	I1208 01:43:43.793825 1031688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:43:43.815138 1031688 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:43.815193 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:43.815469 1031688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:43.815537 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:43:43.843375 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:43:43.850999 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:43:43.946387 1031688 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:44.038437 1031688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:43:44.074357 1031688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:44.079288 1031688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:44.079382 1031688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:44.108230 1031688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:43:44.108257 1031688 start.go:496] detecting cgroup driver to use...
	I1208 01:43:44.108290 1031688 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:44.108354 1031688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:43:44.126415 1031688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:43:44.139826 1031688 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:44.139893 1031688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:44.158285 1031688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:44.177078 1031688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:44.301202 1031688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:44.426310 1031688 docker.go:234] disabling docker service ...
	I1208 01:43:44.426396 1031688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:44.449420 1031688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:44.463771 1031688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:44.591556 1031688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:44.711240 1031688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:44.723877 1031688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:44.737671 1031688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:43:44.737745 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.747403 1031688 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:43:44.747541 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.756014 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.764496 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.773338 1031688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:44.781574 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.790496 1031688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.803821 1031688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:43:44.812578 1031688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:44.820348 1031688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:44.828047 1031688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:44.941169 1031688 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:43:45.168040 1031688 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:43:45.168134 1031688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:43:45.173669 1031688 start.go:564] Will wait 60s for crictl version
	I1208 01:43:45.173832 1031688 ssh_runner.go:195] Run: which crictl
	I1208 01:43:45.179011 1031688 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:45.241778 1031688 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:43:45.241938 1031688 ssh_runner.go:195] Run: crio --version
	I1208 01:43:45.303503 1031688 ssh_runner.go:195] Run: crio --version
	I1208 01:43:45.344094 1031688 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:43:45.347310 1031688 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993283 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:45.365556 1031688 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:45.369598 1031688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:45.379844 1031688 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:45.379971 1031688 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:43:45.380034 1031688 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:45.412114 1031688 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:43:45.412139 1031688 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:43:45.412198 1031688 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:45.437885 1031688 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:43:45.437909 1031688 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:45.437918 1031688 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1208 01:43:45.438011 1031688 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
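The [Unit]/[Service] fragment above is the kubelet drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 378-byte scp). A minimal sketch of inspecting the merged unit on the node, not commands taken from this log:

    sudo systemctl cat kubelet.service          # base unit plus the 10-kubeadm.conf override
    sudo systemctl show kubelet -p ExecStart    # effective (overridden) ExecStart line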
	I1208 01:43:45.438105 1031688 ssh_runner.go:195] Run: crio config
	I1208 01:43:45.492471 1031688 cni.go:84] Creating CNI manager for ""
	I1208 01:43:45.492500 1031688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:43:45.492521 1031688 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:43:45.492544 1031688 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993283 NodeName:default-k8s-diff-port-993283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:45.492670 1031688 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993283"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:43:45.492751 1031688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:43:45.500280 1031688 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:45.500402 1031688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:45.508089 1031688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1208 01:43:45.521179 1031688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:43:45.534722 1031688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
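At this point the rendered kubeadm configuration shown above is staged as /var/tmp/minikube/kubeadm.yaml.new. A sketch of exercising that file by hand before an init, which this run does not do, would be:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
    # --dry-run validates the config and renders manifests without modifying the node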
	I1208 01:43:45.548250 1031688 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:45.551926 1031688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:45.561714 1031688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:45.684539 1031688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:43:45.707411 1031688 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283 for IP: 192.168.85.2
	I1208 01:43:45.707432 1031688 certs.go:195] generating shared ca certs ...
	I1208 01:43:45.707463 1031688 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:45.707646 1031688 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:43:45.707693 1031688 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:43:45.707705 1031688 certs.go:257] generating profile certs ...
	I1208 01:43:45.707781 1031688 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.key
	I1208 01:43:45.707809 1031688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt with IP's: []
	I1208 01:43:46.289516 1031688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt ...
	I1208 01:43:46.289555 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: {Name:mk6fab38b39b6018a8b107672158feed7a9288e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.289755 1031688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.key ...
	I1208 01:43:46.289771 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.key: {Name:mkd387ea0a0517e07410af0ee7209c599b5589c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.289871 1031688 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1
	I1208 01:43:46.289890 1031688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt.42acf7b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:43:46.662138 1031688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt.42acf7b1 ...
	I1208 01:43:46.662176 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt.42acf7b1: {Name:mk6f016eef4b093d0e93f710b670733e9d0a9edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.662380 1031688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1 ...
	I1208 01:43:46.662403 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1: {Name:mkca9b703437b2b2fa25fe6281ae9c4696548b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.662489 1031688 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt.42acf7b1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt
	I1208 01:43:46.662576 1031688 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key
	I1208 01:43:46.662643 1031688 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key
	I1208 01:43:46.662666 1031688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt with IP's: []
	I1208 01:43:46.810254 1031688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt ...
	I1208 01:43:46.810285 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt: {Name:mk41588602fecf7ce246e030d5bcd2ddfe27cc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.810449 1031688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key ...
	I1208 01:43:46.810466 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key: {Name:mkebfd43743b0a4b99a82d877f8bf5c2e38201b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:46.810650 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:43:46.810698 1031688 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:46.810712 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:43:46.810744 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:46.810787 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:46.810825 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:43:46.810890 1031688 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:43:46.811489 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:46.829940 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:46.848317 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:46.866661 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:46.885373 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1208 01:43:46.903253 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:46.920294 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:46.938397 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:46.956088 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:43:46.973445 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:46.992298 1031688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:43:47.012636 1031688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:43:47.026695 1031688 ssh_runner.go:195] Run: openssl version
	I1208 01:43:47.033398 1031688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:43:47.041171 1031688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:43:47.048972 1031688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:43:47.052926 1031688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:43:47.053036 1031688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:43:47.094432 1031688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:47.102065 1031688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:43:47.109211 1031688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:43:47.116658 1031688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:43:47.124300 1031688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:43:47.128066 1031688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:43:47.128146 1031688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:43:47.169202 1031688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:47.177032 1031688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:47.184571 1031688 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:47.192136 1031688 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:47.199413 1031688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:47.202956 1031688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:47.203020 1031688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:47.244887 1031688 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:47.252342 1031688 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
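The openssl/ln steps above rebuild the standard OpenSSL hashed-symlink layout: each CA under /usr/share/ca-certificates gets a link named <subject-hash>.0 in /etc/ssl/certs (b5213941 for minikubeCA.pem in this run). The same pairing can be expressed as a two-line sketch:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"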
	I1208 01:43:47.265004 1031688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:47.269574 1031688 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:47.269672 1031688 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:47.269815 1031688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:43:47.269922 1031688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:47.300297 1031688 cri.go:89] found id: ""
	I1208 01:43:47.300416 1031688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:47.309416 1031688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:47.318175 1031688 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:47.318286 1031688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:47.326302 1031688 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:47.326331 1031688 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:47.326383 1031688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1208 01:43:47.334251 1031688 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:47.334319 1031688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:47.341763 1031688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1208 01:43:47.349244 1031688 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:47.349359 1031688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:47.356772 1031688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1208 01:43:47.364306 1031688 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:47.364398 1031688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:47.371908 1031688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1208 01:43:47.379684 1031688 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:47.379778 1031688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:43:47.386959 1031688 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:43:47.428228 1031688 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 01:43:47.428608 1031688 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:47.453626 1031688 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:47.453703 1031688 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:47.453747 1031688 kubeadm.go:319] OS: Linux
	I1208 01:43:47.453799 1031688 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:47.453851 1031688 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:47.453902 1031688 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:47.453963 1031688 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:47.454016 1031688 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:47.454073 1031688 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:47.454123 1031688 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:47.454174 1031688 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:47.454224 1031688 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:47.525051 1031688 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:47.525166 1031688 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:47.525261 1031688 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:47.537248 1031688 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:47.544040 1031688 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:47.544142 1031688 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:47.546103 1031688 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:47.715175 1031688 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:47.965212 1031688 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:48.312381 1031688 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:48.612564 1031688 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:48.743252 1031688 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:48.743628 1031688 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993283 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:43:49.290436 1031688 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:49.290818 1031688 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993283 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:43:49.429345 1031688 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:50.350058 1031688 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:50.393759 1031688 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:50.394057 1031688 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:51.321191 1031688 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:51.875331 1031688 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:52.290793 1031688 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:52.703356 1031688 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:52.933201 1031688 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:52.933971 1031688 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:52.936769 1031688 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:52.940629 1031688 out.go:252]   - Booting up control plane ...
	I1208 01:43:52.940738 1031688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:52.940816 1031688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:52.940888 1031688 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:52.958868 1031688 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:52.958984 1031688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:52.966480 1031688 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:52.966919 1031688 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:52.966967 1031688 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:53.104194 1031688 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:53.104326 1031688 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:43:54.608808 1031688 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500868327s
	I1208 01:43:54.608983 1031688 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 01:43:54.609090 1031688 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1208 01:43:54.609211 1031688 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 01:43:54.609304 1031688 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 01:43:57.745025 1031688 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.136374114s
	I1208 01:43:59.365502 1031688 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.756956078s
	I1208 01:44:01.610962 1031688 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002492787s
	I1208 01:44:01.643583 1031688 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 01:44:01.663714 1031688 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 01:44:01.716429 1031688 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 01:44:01.716714 1031688 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-993283 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 01:44:01.732683 1031688 kubeadm.go:319] [bootstrap-token] Using token: 5pgvzm.grken8htkb8jgnol
	I1208 01:44:01.735576 1031688 out.go:252]   - Configuring RBAC rules ...
	I1208 01:44:01.735708 1031688 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 01:44:01.741177 1031688 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 01:44:01.751344 1031688 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 01:44:01.758514 1031688 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 01:44:01.766860 1031688 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 01:44:01.772040 1031688 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 01:44:02.020823 1031688 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 01:44:02.459575 1031688 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 01:44:03.021226 1031688 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 01:44:03.023006 1031688 kubeadm.go:319] 
	I1208 01:44:03.023113 1031688 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 01:44:03.023133 1031688 kubeadm.go:319] 
	I1208 01:44:03.023212 1031688 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 01:44:03.023220 1031688 kubeadm.go:319] 
	I1208 01:44:03.023250 1031688 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 01:44:03.023310 1031688 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 01:44:03.023365 1031688 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 01:44:03.023369 1031688 kubeadm.go:319] 
	I1208 01:44:03.023423 1031688 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 01:44:03.023426 1031688 kubeadm.go:319] 
	I1208 01:44:03.023474 1031688 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 01:44:03.023478 1031688 kubeadm.go:319] 
	I1208 01:44:03.023530 1031688 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 01:44:03.023609 1031688 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 01:44:03.023678 1031688 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 01:44:03.023682 1031688 kubeadm.go:319] 
	I1208 01:44:03.023771 1031688 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 01:44:03.023849 1031688 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 01:44:03.023858 1031688 kubeadm.go:319] 
	I1208 01:44:03.023943 1031688 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 5pgvzm.grken8htkb8jgnol \
	I1208 01:44:03.024047 1031688 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 01:44:03.024067 1031688 kubeadm.go:319] 	--control-plane 
	I1208 01:44:03.024071 1031688 kubeadm.go:319] 
	I1208 01:44:03.024156 1031688 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 01:44:03.024160 1031688 kubeadm.go:319] 
	I1208 01:44:03.024250 1031688 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 5pgvzm.grken8htkb8jgnol \
	I1208 01:44:03.024363 1031688 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 01:44:03.029082 1031688 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 01:44:03.029311 1031688 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:44:03.029426 1031688 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
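The join commands printed above embed a bootstrap token and the CA public-key hash. If that hash ever needs to be recomputed, the standard recipe applies; a sketch assuming minikube's certificateDir of /var/lib/minikube/certs from the config above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 | awk '{print "sha256:" $2}'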
	I1208 01:44:03.029445 1031688 cni.go:84] Creating CNI manager for ""
	I1208 01:44:03.029453 1031688 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:44:03.032715 1031688 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 01:44:03.035636 1031688 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 01:44:03.040763 1031688 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 01:44:03.040805 1031688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 01:44:03.057245 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 01:44:03.371876 1031688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 01:44:03.372012 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:03.372093 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993283 minikube.k8s.io/updated_at=2025_12_08T01_44_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=default-k8s-diff-port-993283 minikube.k8s.io/primary=true
	I1208 01:44:03.540708 1031688 ops.go:34] apiserver oom_adj: -16
	I1208 01:44:03.540826 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:04.040843 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:04.540928 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:05.040862 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:05.540864 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:06.041302 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:06.540860 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:07.041338 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:07.540824 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:08.040839 1031688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:44:08.154108 1031688 kubeadm.go:1114] duration metric: took 4.782147886s to wait for elevateKubeSystemPrivileges
	I1208 01:44:08.154149 1031688 kubeadm.go:403] duration metric: took 20.884483297s to StartCluster
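The block of repeated "kubectl get sa default" calls above is minikube polling roughly every 500 ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges step waits on. A bash equivalent of that wait, as a sketch only:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done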
	I1208 01:44:08.154169 1031688 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:44:08.154238 1031688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:44:08.155042 1031688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:44:08.155300 1031688 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:44:08.155416 1031688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 01:44:08.155659 1031688 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:44:08.155702 1031688 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:44:08.155763 1031688 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993283"
	I1208 01:44:08.155783 1031688 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993283"
	I1208 01:44:08.155809 1031688 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:44:08.156426 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:44:08.156583 1031688 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993283"
	I1208 01:44:08.156606 1031688 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993283"
	I1208 01:44:08.156860 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:44:08.159086 1031688 out.go:179] * Verifying Kubernetes components...
	I1208 01:44:08.161934 1031688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:44:08.202713 1031688 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993283"
	I1208 01:44:08.202754 1031688 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:44:08.203219 1031688 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:44:08.204390 1031688 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:44:08.207376 1031688 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:44:08.207405 1031688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:44:08.207466 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:44:08.244292 1031688 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:44:08.244317 1031688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:44:08.244383 1031688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:44:08.255822 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:44:08.272397 1031688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:44:08.482518 1031688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 01:44:08.509679 1031688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:44:08.709983 1031688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:44:08.728875 1031688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
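With storageclass.yaml and storage-provisioner.yaml applied, both addons run as ordinary kube-system objects (the provisioner appears as the "storage-provisioner" pod in the pod listings further down). A sketch of checking them directly, outside of what this run logs:

    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass    # the default class installed by storageclass.yaml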
	I1208 01:44:09.141639 1031688 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1208 01:44:09.143449 1031688 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993283" to be "Ready" ...
	I1208 01:44:09.479654 1031688 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1208 01:44:09.482512 1031688 addons.go:530] duration metric: took 1.326797798s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1208 01:44:09.652058 1031688 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993283" context rescaled to 1 replicas
	W1208 01:44:11.147964 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:13.648132 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:16.148246 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:18.648191 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:21.147749 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:23.148210 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:25.648296 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:28.147582 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:30.647892 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:32.649520 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:35.148742 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:37.647516 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:40.147743 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:42.148419 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:44.648676 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:47.148237 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	W1208 01:44:49.648422 1031688 node_ready.go:57] node "default-k8s-diff-port-993283" has "Ready":"False" status (will retry)
	I1208 01:44:50.152816 1031688 node_ready.go:49] node "default-k8s-diff-port-993283" is "Ready"
	I1208 01:44:50.152848 1031688 node_ready.go:38] duration metric: took 41.008035462s for node "default-k8s-diff-port-993283" to be "Ready" ...
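The Ready polling above corresponds to a plain kubectl wait; a sketch of the equivalent one-liner, assuming the kubeconfig already copied to /var/lib/minikube/kubeconfig:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/default-k8s-diff-port-993283 --timeout=6m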
	I1208 01:44:50.152862 1031688 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:44:50.152924 1031688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:44:50.179556 1031688 api_server.go:72] duration metric: took 42.024211915s to wait for apiserver process to appear ...
	I1208 01:44:50.179586 1031688 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:44:50.179611 1031688 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1208 01:44:50.199401 1031688 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1208 01:44:50.201701 1031688 api_server.go:141] control plane version: v1.34.2
	I1208 01:44:50.201732 1031688 api_server.go:131] duration metric: took 22.13817ms to wait for apiserver health ...
	I1208 01:44:50.201748 1031688 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:44:50.209514 1031688 system_pods.go:59] 8 kube-system pods found
	I1208 01:44:50.209557 1031688 system_pods.go:61] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:44:50.209565 1031688 system_pods.go:61] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running
	I1208 01:44:50.209571 1031688 system_pods.go:61] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running
	I1208 01:44:50.209575 1031688 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running
	I1208 01:44:50.209580 1031688 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running
	I1208 01:44:50.209585 1031688 system_pods.go:61] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running
	I1208 01:44:50.209595 1031688 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running
	I1208 01:44:50.209601 1031688 system_pods.go:61] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:44:50.209610 1031688 system_pods.go:74] duration metric: took 7.855906ms to wait for pod list to return data ...
	I1208 01:44:50.209628 1031688 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:44:50.212489 1031688 default_sa.go:45] found service account: "default"
	I1208 01:44:50.212513 1031688 default_sa.go:55] duration metric: took 2.880023ms for default service account to be created ...
	I1208 01:44:50.212523 1031688 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:44:50.217622 1031688 system_pods.go:86] 8 kube-system pods found
	I1208 01:44:50.217662 1031688 system_pods.go:89] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:44:50.217670 1031688 system_pods.go:89] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running
	I1208 01:44:50.217676 1031688 system_pods.go:89] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running
	I1208 01:44:50.217681 1031688 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running
	I1208 01:44:50.217686 1031688 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running
	I1208 01:44:50.217691 1031688 system_pods.go:89] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running
	I1208 01:44:50.217701 1031688 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running
	I1208 01:44:50.217706 1031688 system_pods.go:89] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:44:50.217744 1031688 retry.go:31] will retry after 266.862032ms: missing components: kube-dns
	I1208 01:44:50.488866 1031688 system_pods.go:86] 8 kube-system pods found
	I1208 01:44:50.488902 1031688 system_pods.go:89] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:44:50.488910 1031688 system_pods.go:89] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running
	I1208 01:44:50.488940 1031688 system_pods.go:89] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running
	I1208 01:44:50.488953 1031688 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running
	I1208 01:44:50.488958 1031688 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running
	I1208 01:44:50.488962 1031688 system_pods.go:89] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running
	I1208 01:44:50.488967 1031688 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running
	I1208 01:44:50.488978 1031688 system_pods.go:89] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:44:50.488994 1031688 retry.go:31] will retry after 320.998588ms: missing components: kube-dns
	I1208 01:44:50.814373 1031688 system_pods.go:86] 8 kube-system pods found
	I1208 01:44:50.814404 1031688 system_pods.go:89] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Running
	I1208 01:44:50.814412 1031688 system_pods.go:89] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running
	I1208 01:44:50.814417 1031688 system_pods.go:89] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running
	I1208 01:44:50.814422 1031688 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running
	I1208 01:44:50.814426 1031688 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running
	I1208 01:44:50.814432 1031688 system_pods.go:89] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running
	I1208 01:44:50.814436 1031688 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running
	I1208 01:44:50.814441 1031688 system_pods.go:89] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Running
	I1208 01:44:50.814449 1031688 system_pods.go:126] duration metric: took 601.919803ms to wait for k8s-apps to be running ...
	I1208 01:44:50.814462 1031688 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:44:50.814522 1031688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:44:50.828807 1031688 system_svc.go:56] duration metric: took 14.335246ms WaitForService to wait for kubelet
	I1208 01:44:50.828834 1031688 kubeadm.go:587] duration metric: took 42.67350073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:44:50.828852 1031688 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:44:50.833018 1031688 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:44:50.833052 1031688 node_conditions.go:123] node cpu capacity is 2
	I1208 01:44:50.833067 1031688 node_conditions.go:105] duration metric: took 4.209613ms to run NodePressure ...
	I1208 01:44:50.833116 1031688 start.go:242] waiting for startup goroutines ...
	I1208 01:44:50.833124 1031688 start.go:247] waiting for cluster config update ...
	I1208 01:44:50.833139 1031688 start.go:256] writing updated cluster config ...
	I1208 01:44:50.833448 1031688 ssh_runner.go:195] Run: rm -f paused
	I1208 01:44:50.837265 1031688 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:44:50.840772 1031688 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.845664 1031688 pod_ready.go:94] pod "coredns-66bc5c9577-rljsm" is "Ready"
	I1208 01:44:50.845692 1031688 pod_ready.go:86] duration metric: took 4.891138ms for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.848174 1031688 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.852728 1031688 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993283" is "Ready"
	I1208 01:44:50.852757 1031688 pod_ready.go:86] duration metric: took 4.558105ms for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.855326 1031688 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.860552 1031688 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993283" is "Ready"
	I1208 01:44:50.860586 1031688 pod_ready.go:86] duration metric: took 5.232041ms for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:50.863297 1031688 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:51.241807 1031688 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993283" is "Ready"
	I1208 01:44:51.241842 1031688 pod_ready.go:86] duration metric: took 378.519072ms for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:51.441291 1031688 pod_ready.go:83] waiting for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:51.841392 1031688 pod_ready.go:94] pod "kube-proxy-5vgcq" is "Ready"
	I1208 01:44:51.841421 1031688 pod_ready.go:86] duration metric: took 400.102102ms for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:52.041734 1031688 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:52.441391 1031688 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993283" is "Ready"
	I1208 01:44:52.441428 1031688 pod_ready.go:86] duration metric: took 399.61289ms for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:44:52.441454 1031688 pod_ready.go:40] duration metric: took 1.604156314s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:44:52.497711 1031688 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:44:52.501197 1031688 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993283" cluster and "default" namespace by default
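	The readiness polling recorded above (system_pods.go retrying until kube-dns is running, then pod_ready.go waiting on each control-plane pod) is a plain list-and-retry loop against the API server. A minimal client-go sketch of that pattern, assuming a reachable kubeconfig at the default path and the kube-system namespace; the interval, timeout, and helper names are illustrative and not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Build a clientset from the default kubeconfig (path is an assumption).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Poll kube-system until every pod is Ready, mirroring the retry loop in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 300*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
				if err != nil {
					return false, nil // transient API errors: keep retrying
				}
				for _, p := range pods.Items {
					if !podReady(&p) {
						fmt.Printf("still waiting for %q\n", p.Name)
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all kube-system pods are Ready")
	}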
	
	
	==> CRI-O <==
	Dec 08 01:44:50 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:50.117258374Z" level=info msg="Created container 21dac0ed5743a80d71765eaa8594ac6aac1dee99762d04fd3c219d2235a68b5d: kube-system/coredns-66bc5c9577-rljsm/coredns" id=c35646a4-649d-493d-9af2-767f5d066ded name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:44:50 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:50.118923205Z" level=info msg="Starting container: 21dac0ed5743a80d71765eaa8594ac6aac1dee99762d04fd3c219d2235a68b5d" id=b9dec640-15f3-4585-be80-527d1b99a3b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:44:50 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:50.122082796Z" level=info msg="Started container" PID=1738 containerID=21dac0ed5743a80d71765eaa8594ac6aac1dee99762d04fd3c219d2235a68b5d description=kube-system/coredns-66bc5c9577-rljsm/coredns id=b9dec640-15f3-4585-be80-527d1b99a3b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3fb385bf6c395c4a4aabd95b8b376a3a1df7124d4a983702cfafccfe3a3bd62
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.03261743Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dc18f71e-da88-47e7-9ac1-506fd936c28f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.032689964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.042447634Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671 UID:82b0cf89-8ccd-4661-9916-328846a942d2 NetNS:/var/run/netns/969e53bb-43f8-4258-a236-478d533bacb9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ce58}] Aliases:map[]}"
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.042498498Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.05167034Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671 UID:82b0cf89-8ccd-4661-9916-328846a942d2 NetNS:/var/run/netns/969e53bb-43f8-4258-a236-478d533bacb9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ce58}] Aliases:map[]}"
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.051908792Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.05561156Z" level=info msg="Ran pod sandbox b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671 with infra container: default/busybox/POD" id=dc18f71e-da88-47e7-9ac1-506fd936c28f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.056715228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9377cf92-fb42-44fb-aa0c-5ac97e6480fc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.056841818Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9377cf92-fb42-44fb-aa0c-5ac97e6480fc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.056886249Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9377cf92-fb42-44fb-aa0c-5ac97e6480fc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.057878974Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eb79d3ab-17a6-48bc-9807-7f0375fd29d7 name=/runtime.v1.ImageService/PullImage
	Dec 08 01:44:53 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:53.060485009Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.246964496Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=eb79d3ab-17a6-48bc-9807-7f0375fd29d7 name=/runtime.v1.ImageService/PullImage
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.247605274Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ad077338-ee39-40b5-a26f-750ebd85a4c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.249080465Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9a658a12-3f98-4094-b497-eced0750eef3 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.25557716Z" level=info msg="Creating container: default/busybox/busybox" id=85483b55-3360-4f74-b52d-a1b8141d5997 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.255707869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.260263349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.260714628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.276382073Z" level=info msg="Created container fffb26a8829654a3e6a819f35451bf066b143ff671a62b3062e065dd62f31636: default/busybox/busybox" id=85483b55-3360-4f74-b52d-a1b8141d5997 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.277107947Z" level=info msg="Starting container: fffb26a8829654a3e6a819f35451bf066b143ff671a62b3062e065dd62f31636" id=bb976386-a102-464a-8134-7b16f32f0adc name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:44:55 default-k8s-diff-port-993283 crio[836]: time="2025-12-08T01:44:55.278944964Z" level=info msg="Started container" PID=1795 containerID=fffb26a8829654a3e6a819f35451bf066b143ff671a62b3062e065dd62f31636 description=default/busybox/busybox id=bb976386-a102-464a-8134-7b16f32f0adc name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671
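	The CRI-O entries above record an ImageStatus check, the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc, and the creation of the busybox container. A minimal sketch of driving the same CRI ImageService directly over CRI-O's socket; only the image name and the default /var/run/crio/crio.sock path come from the log, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's socket (path is an assumption; adjust for other runtimes).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		images := runtimeapi.NewImageServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		// Mirror the log's flow: check ImageStatus first, pull only if the image is absent.
		status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: ref})
		if err != nil {
			panic(err)
		}
		if status.Image == nil {
			if _, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: ref}); err != nil {
				panic(err)
			}
			fmt.Println("pulled", ref.Image)
		} else {
			fmt.Println("already present:", status.Image.Id)
		}
	}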
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fffb26a882965       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   b9a1c8e8cd99c       busybox                                                default
	21dac0ed5743a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   e3fb385bf6c39       coredns-66bc5c9577-rljsm                               kube-system
	bd49b4fa0655d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   8b90a85f7d94e       storage-provisioner                                    kube-system
	89f8f0b0abb2d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   3edcdbf171900       kindnet-2khbg                                          kube-system
	0887637d98454       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      53 seconds ago       Running             kube-proxy                0                   044c873da8fd5       kube-proxy-5vgcq                                       kube-system
	ac6814040b30c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      About a minute ago   Running             kube-controller-manager   0                   42e326cc06ada       kube-controller-manager-default-k8s-diff-port-993283   kube-system
	8958e9306eb7e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      About a minute ago   Running             kube-apiserver            0                   91a0427e5abcf       kube-apiserver-default-k8s-diff-port-993283            kube-system
	fb06a069015ee       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      About a minute ago   Running             etcd                      0                   5944cfacd3dd6       etcd-default-k8s-diff-port-993283                      kube-system
	81e91255363d8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      About a minute ago   Running             kube-scheduler            0                   f4a15996e9ebb       kube-scheduler-default-k8s-diff-port-993283            kube-system
	
	
	==> coredns [21dac0ed5743a80d71765eaa8594ac6aac1dee99762d04fd3c219d2235a68b5d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38096 - 40895 "HINFO IN 2055696860268424311.5605408919012550514. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022754014s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-993283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-993283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_44_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:43:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:44:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:44:49 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:44:49 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:44:49 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:44:49 +0000   Mon, 08 Dec 2025 01:44:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-993283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                cf00620d-cf66-43ae-830e-048a75681d0e
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-rljsm                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-993283                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-2khbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-5vgcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-993283 event: Registered Node default-k8s-diff-port-993283 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-993283 status is now: NodeReady
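	The capacity and condition fields in this describe output are what the earlier node_conditions.go lines verified (cpu capacity 2, ephemeral-storage 203034800Ki, no pressure conditions). A minimal client-go sketch that reads the same fields, assuming a reachable kubeconfig; the node name is taken from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		node, err := client.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-993283", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Capacity values the log verifies: cpu and ephemeral-storage.
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())

		// Report any pressure condition that is currently True (none were in this run).
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("node under pressure: %s (%s)\n", c.Type, c.Reason)
				}
			}
		}
	}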
	
	
	==> dmesg <==
	[Dec 8 01:11] overlayfs: idmapped layers are currently not supported
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fb06a069015eefbd2b4e7c5df7602c2384fd9850b28e3c2878c46daba21b2112] <==
	{"level":"warn","ts":"2025-12-08T01:43:57.755776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.773638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.795290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.819176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.836967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.858879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.877357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.891765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.907490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.932949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.953559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.983338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.983995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:57.999842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.026190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.039681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.062874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.075061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.105332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.126095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.144418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.173658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.188072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.201034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:43:58.302953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54502","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:45:02 up  6:27,  0 user,  load average: 0.98, 2.14, 2.14
	Linux default-k8s-diff-port-993283 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [89f8f0b0abb2dd27f91cc61d9e31430b86fd9ea6b9ca2a266eaace89894cdc40] <==
	I1208 01:44:08.920444       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:44:08.920938       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:44:08.921084       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:44:08.921097       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:44:08.921110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:44:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:44:09.125002       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:44:09.125021       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:44:09.125030       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:44:09.125151       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:44:39.124127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:44:39.125280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:44:39.125310       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1208 01:44:39.125402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1208 01:44:40.725804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:44:40.725838       1 metrics.go:72] Registering metrics
	I1208 01:44:40.725910       1 controller.go:711] "Syncing nftables rules"
	I1208 01:44:49.125857       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:44:49.125915       1 main.go:301] handling current node
	I1208 01:44:59.122950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:44:59.123109       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8958e9306eb7ef9c3a74ca9fc6b6c938e58ba91d42dc162172ab8b3f17c201b5] <==
	I1208 01:43:59.438616       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:43:59.439088       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:43:59.440102       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1208 01:43:59.440355       1 aggregator.go:171] initial CRD sync complete...
	I1208 01:43:59.440404       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 01:43:59.440435       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 01:43:59.440462       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:44:00.029315       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1208 01:44:00.125061       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1208 01:44:00.125101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:44:01.106558       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:44:01.166764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:44:01.291060       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1208 01:44:01.299013       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1208 01:44:01.300306       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:44:01.305857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 01:44:02.175104       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:44:02.435284       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:44:02.458482       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1208 01:44:02.473158       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1208 01:44:08.029703       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:44:08.034919       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:44:08.074661       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1208 01:44:08.225202       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1208 01:45:00.869096       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:57088: use of closed network connection
	
	
	==> kube-controller-manager [ac6814040b30c96f07c201e3b465fa69b45bd9cf7370a06a7b31a634e4ec9479] <==
	I1208 01:44:07.219656       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1208 01:44:07.219741       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 01:44:07.219933       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1208 01:44:07.221052       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1208 01:44:07.224408       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 01:44:07.226558       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:44:07.231739       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:44:07.255083       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 01:44:07.260440       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:44:07.265829       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 01:44:07.267038       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:44:07.267121       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:44:07.267156       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:44:07.267180       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:44:07.267260       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 01:44:07.267268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-993283"
	I1208 01:44:07.267362       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1208 01:44:07.267186       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:44:07.268886       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 01:44:07.269100       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1208 01:44:07.269179       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 01:44:07.272919       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 01:44:07.274130       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1208 01:44:07.280317       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:44:52.274199       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0887637d984544ea3652541d35bb912497a7006b73c14bdff228e85e8c196eee] <==
	I1208 01:44:08.890921       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:44:08.999281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:44:09.010999       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:44:09.011028       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:44:09.011112       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:44:09.116188       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:44:09.116246       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:44:09.125058       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:44:09.127074       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:44:09.127093       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:44:09.136823       1 config.go:200] "Starting service config controller"
	I1208 01:44:09.136856       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:44:09.136877       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:44:09.136881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:44:09.136892       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:44:09.136896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:44:09.138961       1 config.go:309] "Starting node config controller"
	I1208 01:44:09.138981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:44:09.138988       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:44:09.243684       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:44:09.248897       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:44:09.248946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [81e91255363d824779e5891929752f8c1dd69b7792c78adbe27aa08db23771c6] <==
	E1208 01:43:59.374319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 01:43:59.374371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 01:43:59.374518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 01:43:59.374547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:43:59.374664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 01:44:00.192158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 01:44:00.245159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1208 01:44:00.292356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 01:44:00.292362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1208 01:44:00.332802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 01:44:00.389894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 01:44:00.392162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 01:44:00.519473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1208 01:44:00.519549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 01:44:00.555634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 01:44:00.582945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 01:44:00.620912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1208 01:44:00.639874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1208 01:44:00.680139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 01:44:00.727715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 01:44:00.778610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 01:44:00.785639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 01:44:00.788299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 01:44:00.798284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1208 01:44:02.762499       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:44:03 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:03.571611    1299 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-993283"
	Dec 08 01:44:03 default-k8s-diff-port-993283 kubelet[1299]: E1208 01:44:03.594312    1299 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-993283\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-993283"
	Dec 08 01:44:07 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:07.235000    1299 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 08 01:44:07 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:07.236163    1299 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387275    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1880686-7984-4078-b524-910a8c47979c-xtables-lock\") pod \"kindnet-2khbg\" (UID: \"f1880686-7984-4078-b524-910a8c47979c\") " pod="kube-system/kindnet-2khbg"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387334    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1880686-7984-4078-b524-910a8c47979c-lib-modules\") pod \"kindnet-2khbg\" (UID: \"f1880686-7984-4078-b524-910a8c47979c\") " pod="kube-system/kindnet-2khbg"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387409    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af8093a4-577c-4e9c-96df-9d8da9bf3e55-xtables-lock\") pod \"kube-proxy-5vgcq\" (UID: \"af8093a4-577c-4e9c-96df-9d8da9bf3e55\") " pod="kube-system/kube-proxy-5vgcq"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387468    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af8093a4-577c-4e9c-96df-9d8da9bf3e55-lib-modules\") pod \"kube-proxy-5vgcq\" (UID: \"af8093a4-577c-4e9c-96df-9d8da9bf3e55\") " pod="kube-system/kube-proxy-5vgcq"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387488    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lg6f\" (UniqueName: \"kubernetes.io/projected/af8093a4-577c-4e9c-96df-9d8da9bf3e55-kube-api-access-5lg6f\") pod \"kube-proxy-5vgcq\" (UID: \"af8093a4-577c-4e9c-96df-9d8da9bf3e55\") " pod="kube-system/kube-proxy-5vgcq"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387538    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f1880686-7984-4078-b524-910a8c47979c-cni-cfg\") pod \"kindnet-2khbg\" (UID: \"f1880686-7984-4078-b524-910a8c47979c\") " pod="kube-system/kindnet-2khbg"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387557    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qd6g\" (UniqueName: \"kubernetes.io/projected/f1880686-7984-4078-b524-910a8c47979c-kube-api-access-7qd6g\") pod \"kindnet-2khbg\" (UID: \"f1880686-7984-4078-b524-910a8c47979c\") " pod="kube-system/kindnet-2khbg"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.387621    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af8093a4-577c-4e9c-96df-9d8da9bf3e55-kube-proxy\") pod \"kube-proxy-5vgcq\" (UID: \"af8093a4-577c-4e9c-96df-9d8da9bf3e55\") " pod="kube-system/kube-proxy-5vgcq"
	Dec 08 01:44:08 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:08.609095    1299 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 08 01:44:09 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:09.635049    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2khbg" podStartSLOduration=1.63502762 podStartE2EDuration="1.63502762s" podCreationTimestamp="2025-12-08 01:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:44:09.619589994 +0000 UTC m=+7.353662419" watchObservedRunningTime="2025-12-08 01:44:09.63502762 +0000 UTC m=+7.369100045"
	Dec 08 01:44:09 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:09.650307    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5vgcq" podStartSLOduration=1.650285282 podStartE2EDuration="1.650285282s" podCreationTimestamp="2025-12-08 01:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:44:09.635420798 +0000 UTC m=+7.369493223" watchObservedRunningTime="2025-12-08 01:44:09.650285282 +0000 UTC m=+7.384357707"
	Dec 08 01:44:49 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:49.659230    1299 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 08 01:44:49 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:49.779795    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0c6db383-1376-476b-8750-39b98c587082-tmp\") pod \"storage-provisioner\" (UID: \"0c6db383-1376-476b-8750-39b98c587082\") " pod="kube-system/storage-provisioner"
	Dec 08 01:44:49 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:49.779856    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkvd\" (UniqueName: \"kubernetes.io/projected/cf8077ab-2473-4eb9-be28-b6159fac1ae1-kube-api-access-wxkvd\") pod \"coredns-66bc5c9577-rljsm\" (UID: \"cf8077ab-2473-4eb9-be28-b6159fac1ae1\") " pod="kube-system/coredns-66bc5c9577-rljsm"
	Dec 08 01:44:49 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:49.779881    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hsfv\" (UniqueName: \"kubernetes.io/projected/0c6db383-1376-476b-8750-39b98c587082-kube-api-access-9hsfv\") pod \"storage-provisioner\" (UID: \"0c6db383-1376-476b-8750-39b98c587082\") " pod="kube-system/storage-provisioner"
	Dec 08 01:44:49 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:49.779905    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf8077ab-2473-4eb9-be28-b6159fac1ae1-config-volume\") pod \"coredns-66bc5c9577-rljsm\" (UID: \"cf8077ab-2473-4eb9-be28-b6159fac1ae1\") " pod="kube-system/coredns-66bc5c9577-rljsm"
	Dec 08 01:44:50 default-k8s-diff-port-993283 kubelet[1299]: W1208 01:44:50.068306    1299 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-e3fb385bf6c395c4a4aabd95b8b376a3a1df7124d4a983702cfafccfe3a3bd62 WatchSource:0}: Error finding container e3fb385bf6c395c4a4aabd95b8b376a3a1df7124d4a983702cfafccfe3a3bd62: Status 404 returned error can't find the container with id e3fb385bf6c395c4a4aabd95b8b376a3a1df7124d4a983702cfafccfe3a3bd62
	Dec 08 01:44:50 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:50.702781    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rljsm" podStartSLOduration=42.702764673 podStartE2EDuration="42.702764673s" podCreationTimestamp="2025-12-08 01:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:44:50.702349489 +0000 UTC m=+48.436421922" watchObservedRunningTime="2025-12-08 01:44:50.702764673 +0000 UTC m=+48.436837123"
	Dec 08 01:44:50 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:50.739215    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.739195685 podStartE2EDuration="41.739195685s" podCreationTimestamp="2025-12-08 01:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 01:44:50.719324928 +0000 UTC m=+48.453397345" watchObservedRunningTime="2025-12-08 01:44:50.739195685 +0000 UTC m=+48.473268102"
	Dec 08 01:44:52 default-k8s-diff-port-993283 kubelet[1299]: I1208 01:44:52.798210    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n5mh\" (UniqueName: \"kubernetes.io/projected/82b0cf89-8ccd-4661-9916-328846a942d2-kube-api-access-4n5mh\") pod \"busybox\" (UID: \"82b0cf89-8ccd-4661-9916-328846a942d2\") " pod="default/busybox"
	Dec 08 01:44:53 default-k8s-diff-port-993283 kubelet[1299]: W1208 01:44:53.054173    1299 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671 WatchSource:0}: Error finding container b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671: Status 404 returned error can't find the container with id b9a1c8e8cd99c298a538edbc2150e9e81e804f4d9f274e4ac82f08fae63a3671
	
	
	==> storage-provisioner [bd49b4fa0655d2ecb3ca0ea7375cc494d39c2c39302ee304f6c24f8fdb48f1c3] <==
	I1208 01:44:50.104779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:44:50.136231       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:44:50.136360       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:44:50.139125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:50.156496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:44:50.156685       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:44:50.156889       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_37a4c2c6-e9fb-4a7e-a327-ce6b875f864f!
	I1208 01:44:50.158086       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cab3daa-bc93-478f-a8f6-505bdc952bd0", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993283_37a4c2c6-e9fb-4a7e-a327-ce6b875f864f became leader
	W1208 01:44:50.196878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:50.207292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:44:50.257817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_37a4c2c6-e9fb-4a7e-a327-ce6b875f864f!
	W1208 01:44:52.210691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:52.217703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:54.221340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:54.225607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:56.228716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:56.236198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:58.239065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:44:58.243748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:45:00.295954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:45:00.332634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:45:02.336729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:45:02.342198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)
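Note on the repeated W-level lines in the storage-provisioner log above: the provisioner's leader election appears to use a v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath, also visible in the LeaderElection event), and each poll/renewal of that object triggers the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. These warnings are not themselves the test failure. A minimal way to look at both resources, assuming kubectl is pointed at this profile's context (object and context names are taken from the log):

	# Leader-election lock the provisioner keeps touching (source of the deprecation warnings)
	kubectl --context default-k8s-diff-port-993283 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The replacement resource the warning points to
	kubectl --context default-k8s-diff-port-993283 -n kube-system get endpointslices.discovery.k8s.io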

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-993283 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-993283 --alsologtostderr -v=1: exit status 80 (1.881657963s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-993283 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:46:20.210117 1038538 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:20.210337 1038538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:20.210365 1038538 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:20.210385 1038538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:20.210676 1038538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:20.211085 1038538 out.go:368] Setting JSON to false
	I1208 01:46:20.211142 1038538 mustload.go:66] Loading cluster: default-k8s-diff-port-993283
	I1208 01:46:20.211624 1038538 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:46:20.212214 1038538 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:46:20.228791 1038538 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:46:20.229112 1038538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:20.288047 1038538 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:46:20.278936527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:20.288674 1038538 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-993283 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1208 01:46:20.292046 1038538 out.go:179] * Pausing node default-k8s-diff-port-993283 ... 
	I1208 01:46:20.295819 1038538 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:46:20.296171 1038538 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:20.296222 1038538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:46:20.313024 1038538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:46:20.417613 1038538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:46:20.432336 1038538 pause.go:52] kubelet running: true
	I1208 01:46:20.432408 1038538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:46:20.677883 1038538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:46:20.677970 1038538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:46:20.745331 1038538 cri.go:89] found id: "04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5"
	I1208 01:46:20.745353 1038538 cri.go:89] found id: "dfda52c6c2d5a79b881816e23529a245320c288d6e1ee3012173375d03bb5e22"
	I1208 01:46:20.745359 1038538 cri.go:89] found id: "cb4ee313f10a6fb94576a8a6258932895e20daeec34523ae1785a5bb60dc5510"
	I1208 01:46:20.745363 1038538 cri.go:89] found id: "8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290"
	I1208 01:46:20.745366 1038538 cri.go:89] found id: "78ec7c222c76f0040d2984b9f18fc8cabd378412d977ffc490ac45a03fb10840"
	I1208 01:46:20.745370 1038538 cri.go:89] found id: "62a0bec36b793ac0d47cde61d186b8c66550bd166b5686cd4e35764e19bfe6e8"
	I1208 01:46:20.745373 1038538 cri.go:89] found id: "f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52"
	I1208 01:46:20.745376 1038538 cri.go:89] found id: "283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf"
	I1208 01:46:20.745379 1038538 cri.go:89] found id: "0ed11b92d0dbc90b302cb1e8297679e0137bd3ee4a68c917b318409054351ef7"
	I1208 01:46:20.745403 1038538 cri.go:89] found id: "45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	I1208 01:46:20.745411 1038538 cri.go:89] found id: "3da7e3a38b7564ab76c53ac7f7701b1a766ed67f247ad39eb03afd4d1b6cfa66"
	I1208 01:46:20.745414 1038538 cri.go:89] found id: ""
	I1208 01:46:20.745467 1038538 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:46:20.757677 1038538 retry.go:31] will retry after 201.167163ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:46:20Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:46:20.959111 1038538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:46:20.971987 1038538 pause.go:52] kubelet running: false
	I1208 01:46:20.972062 1038538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:46:21.146263 1038538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:46:21.146362 1038538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:46:21.213496 1038538 cri.go:89] found id: "04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5"
	I1208 01:46:21.213519 1038538 cri.go:89] found id: "dfda52c6c2d5a79b881816e23529a245320c288d6e1ee3012173375d03bb5e22"
	I1208 01:46:21.213524 1038538 cri.go:89] found id: "cb4ee313f10a6fb94576a8a6258932895e20daeec34523ae1785a5bb60dc5510"
	I1208 01:46:21.213528 1038538 cri.go:89] found id: "8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290"
	I1208 01:46:21.213531 1038538 cri.go:89] found id: "78ec7c222c76f0040d2984b9f18fc8cabd378412d977ffc490ac45a03fb10840"
	I1208 01:46:21.213535 1038538 cri.go:89] found id: "62a0bec36b793ac0d47cde61d186b8c66550bd166b5686cd4e35764e19bfe6e8"
	I1208 01:46:21.213537 1038538 cri.go:89] found id: "f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52"
	I1208 01:46:21.213540 1038538 cri.go:89] found id: "283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf"
	I1208 01:46:21.213543 1038538 cri.go:89] found id: "0ed11b92d0dbc90b302cb1e8297679e0137bd3ee4a68c917b318409054351ef7"
	I1208 01:46:21.213579 1038538 cri.go:89] found id: "45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	I1208 01:46:21.213584 1038538 cri.go:89] found id: "3da7e3a38b7564ab76c53ac7f7701b1a766ed67f247ad39eb03afd4d1b6cfa66"
	I1208 01:46:21.213587 1038538 cri.go:89] found id: ""
	I1208 01:46:21.213653 1038538 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:46:21.225635 1038538 retry.go:31] will retry after 504.814954ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:46:21Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:46:21.731471 1038538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:46:21.744636 1038538 pause.go:52] kubelet running: false
	I1208 01:46:21.744705 1038538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1208 01:46:21.928928 1038538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1208 01:46:21.929026 1038538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1208 01:46:21.993158 1038538 cri.go:89] found id: "04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5"
	I1208 01:46:21.993182 1038538 cri.go:89] found id: "dfda52c6c2d5a79b881816e23529a245320c288d6e1ee3012173375d03bb5e22"
	I1208 01:46:21.993187 1038538 cri.go:89] found id: "cb4ee313f10a6fb94576a8a6258932895e20daeec34523ae1785a5bb60dc5510"
	I1208 01:46:21.993191 1038538 cri.go:89] found id: "8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290"
	I1208 01:46:21.993194 1038538 cri.go:89] found id: "78ec7c222c76f0040d2984b9f18fc8cabd378412d977ffc490ac45a03fb10840"
	I1208 01:46:21.993198 1038538 cri.go:89] found id: "62a0bec36b793ac0d47cde61d186b8c66550bd166b5686cd4e35764e19bfe6e8"
	I1208 01:46:21.993201 1038538 cri.go:89] found id: "f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52"
	I1208 01:46:21.993204 1038538 cri.go:89] found id: "283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf"
	I1208 01:46:21.993206 1038538 cri.go:89] found id: "0ed11b92d0dbc90b302cb1e8297679e0137bd3ee4a68c917b318409054351ef7"
	I1208 01:46:21.993213 1038538 cri.go:89] found id: "45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	I1208 01:46:21.993216 1038538 cri.go:89] found id: "3da7e3a38b7564ab76c53ac7f7701b1a766ed67f247ad39eb03afd4d1b6cfa66"
	I1208 01:46:21.993219 1038538 cri.go:89] found id: ""
	I1208 01:46:21.993267 1038538 ssh_runner.go:195] Run: sudo runc list -f json
	I1208 01:46:22.018426 1038538 out.go:203] 
	W1208 01:46:22.021411 1038538 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:46:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:46:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1208 01:46:22.021443 1038538 out.go:285] * 
	* 
	W1208 01:46:22.028746 1038538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:46:22.031764 1038538 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-993283 --alsologtostderr -v=1 failed: exit status 80
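The stderr trace above shows the sequence the pause command follows: confirm kubelet is active, disable it, list CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then ask runc for the running containers so they can be paused. The step that fails (three times, exhausting the retries) is `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" on this CRI-O node, and that error is what surfaces as GUEST_PAUSE / exit status 80. The checks can be replayed by hand over SSH; this sketch only reuses commands that appear verbatim in the log, with the profile name from this test:

	# Kubelet state as seen by the pause path
	minikube -p default-k8s-diff-port-993283 ssh -- sudo systemctl is-active --quiet service kubelet
	# CRI containers the pause path enumerates (kube-system shown; the log also queries kubernetes-dashboard and istio-operator)
	minikube -p default-k8s-diff-port-993283 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The call that fails with "open /run/runc: no such file or directory"
	minikube -p default-k8s-diff-port-993283 ssh -- sudo runc list -f json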
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993283
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993283:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	        "Created": "2025-12-08T01:43:38.262395986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1035957,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:45:15.994040189Z",
	            "FinishedAt": "2025-12-08T01:45:15.213986306Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hosts",
	        "LogPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505-json.log",
	        "Name": "/default-k8s-diff-port-993283",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993283:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-993283",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	                "LowerDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993283",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993283/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993283",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd126244f81621fb1020548d7c1477373dd5291c8b391bfd816cca96e5a69aad",
	            "SandboxKey": "/var/run/docker/netns/fd126244f816",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993283": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:0d:88:2a:6f:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b08c231373a28dfebdc786db1bd7305a935d3afbb9f365148f132a530c3640",
	                    "EndpointID": "82d0aff6a0595d08a4209ba5c36d31e30829191d5086c0f047e059ec0da52e7c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993283",
	                        "9cfbb32a7825"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283: exit status 2 (343.656752ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
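Both post-mortems query cluster state through Go templates on `minikube status` ({{.APIServer}} after the EnableAddonWhileActive failure, {{.Host}} here). The exit code of `minikube status` reflects component state rather than just command success, so a non-zero exit (status 2 above, with "Running" still printed for the host) can be expected when parts of the cluster are stopped or paused; this is why helpers_test.go annotates it "(may be ok)". For reference, the two probes used in this report:

	# Host state only; the container is still running, so the template prints "Running"
	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
	# API server state, used by the earlier EnableAddonWhileActive post-mortem
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283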
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25: (1.246527319s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                   │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                               │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:45:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:45:15.727636 1035829 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:45:15.727780 1035829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:45:15.727791 1035829 out.go:374] Setting ErrFile to fd 2...
	I1208 01:45:15.727796 1035829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:45:15.728045 1035829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:45:15.728407 1035829 out.go:368] Setting JSON to false
	I1208 01:45:15.729271 1035829 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23248,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:45:15.729339 1035829 start.go:143] virtualization:  
	I1208 01:45:15.732452 1035829 out.go:179] * [default-k8s-diff-port-993283] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:45:15.736135 1035829 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:45:15.736238 1035829 notify.go:221] Checking for updates...
	I1208 01:45:15.742040 1035829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:45:15.745120 1035829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:15.748090 1035829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:45:15.751116 1035829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:45:15.754031 1035829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:45:15.757687 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:15.758347 1035829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:45:15.784673 1035829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:45:15.784795 1035829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:45:15.841442 1035829 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:45:15.832256582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:45:15.841568 1035829 docker.go:319] overlay module found
	I1208 01:45:15.846510 1035829 out.go:179] * Using the docker driver based on existing profile
	I1208 01:45:15.849399 1035829 start.go:309] selected driver: docker
	I1208 01:45:15.849418 1035829 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:15.849541 1035829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:45:15.850233 1035829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:45:15.909636 1035829 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:45:15.900145487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:45:15.909955 1035829 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:45:15.910000 1035829 cni.go:84] Creating CNI manager for ""
	I1208 01:45:15.910062 1035829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:45:15.910104 1035829 start.go:353] cluster config:
	{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:15.913275 1035829 out.go:179] * Starting "default-k8s-diff-port-993283" primary control-plane node in "default-k8s-diff-port-993283" cluster
	I1208 01:45:15.916186 1035829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:45:15.919161 1035829 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:45:15.922000 1035829 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:45:15.922049 1035829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:45:15.922059 1035829 cache.go:65] Caching tarball of preloaded images
	I1208 01:45:15.922084 1035829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:45:15.922149 1035829 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:45:15.922160 1035829 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:45:15.922272 1035829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:45:15.940965 1035829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:45:15.940987 1035829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:45:15.941002 1035829 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:45:15.941031 1035829 start.go:360] acquireMachinesLock for default-k8s-diff-port-993283: {Name:mk8568f2bc3d9295af85055d5f2cebcc44a030bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:45:15.941090 1035829 start.go:364] duration metric: took 34.872µs to acquireMachinesLock for "default-k8s-diff-port-993283"
	I1208 01:45:15.941115 1035829 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:45:15.941124 1035829 fix.go:54] fixHost starting: 
	I1208 01:45:15.941384 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:15.958164 1035829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993283: state=Stopped err=<nil>
	W1208 01:45:15.958193 1035829 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:45:15.961363 1035829 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993283" ...
	I1208 01:45:15.961446 1035829 cli_runner.go:164] Run: docker start default-k8s-diff-port-993283
	I1208 01:45:16.221815 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:16.244135 1035829 kic.go:430] container "default-k8s-diff-port-993283" state is running.
	I1208 01:45:16.244524 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:16.267796 1035829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:45:16.268030 1035829 machine.go:94] provisionDockerMachine start ...
	I1208 01:45:16.268085 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
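The Go template passed to `docker container inspect -f` above is how the forwarded host port for the container's 22/tcp (33802 in this run) gets extracted from the inspect output. Below is a minimal standalone sketch of that same template expression, evaluated against a trimmed-down, hypothetical stand-in for the inspect JSON; it is illustrative only and not minikube's or docker's code.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "text/template"
    )

    // Hypothetical, trimmed-down stand-in for `docker container inspect` JSON;
    // only the fields the template below touches are modeled.
    const inspectJSON = `{
      "NetworkSettings": {
        "Ports": {
          "22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33802"}]
        }
      }
    }`

    func main() {
        var data map[string]interface{}
        if err := json.Unmarshal([]byte(inspectJSON), &data); err != nil {
            panic(err)
        }
        // Same template expression the log shows being passed with -f:
        // it indexes the Ports map at "22/tcp", takes element 0, and reads HostPort.
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
        fmt.Println() // prints: 33802
    }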
	I1208 01:45:16.295243 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:16.295573 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:16.295582 1035829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:45:16.296964 1035829 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50876->127.0.0.1:33802: read: connection reset by peer
	I1208 01:45:19.454371 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:45:19.454441 1035829 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993283"
	I1208 01:45:19.454542 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:19.472337 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:19.472661 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:19.472679 1035829 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993283 && echo "default-k8s-diff-port-993283" | sudo tee /etc/hostname
	I1208 01:45:19.632498 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:45:19.632622 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:19.650561 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:19.651054 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:19.651087 1035829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993283/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:45:19.803065 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:45:19.803091 1035829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:45:19.803116 1035829 ubuntu.go:190] setting up certificates
	I1208 01:45:19.803126 1035829 provision.go:84] configureAuth start
	I1208 01:45:19.803189 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:19.819588 1035829 provision.go:143] copyHostCerts
	I1208 01:45:19.819667 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:45:19.819687 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:45:19.819769 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:45:19.819887 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:45:19.819897 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:45:19.819926 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:45:19.819992 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:45:19.820001 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:45:19.820031 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:45:19.820095 1035829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993283 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-993283 localhost minikube]
	I1208 01:45:20.031438 1035829 provision.go:177] copyRemoteCerts
	I1208 01:45:20.031525 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:45:20.031583 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.053769 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.162693 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:45:20.180306 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1208 01:45:20.198070 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:45:20.215125 1035829 provision.go:87] duration metric: took 411.975108ms to configureAuth
	I1208 01:45:20.215152 1035829 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:45:20.215343 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:20.215442 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.233947 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:20.234408 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:20.234427 1035829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:45:20.587969 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:45:20.587990 1035829 machine.go:97] duration metric: took 4.319950835s to provisionDockerMachine
	I1208 01:45:20.588002 1035829 start.go:293] postStartSetup for "default-k8s-diff-port-993283" (driver="docker")
	I1208 01:45:20.588014 1035829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:45:20.588077 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:45:20.588118 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.607779 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.714806 1035829 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:45:20.718161 1035829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:45:20.718191 1035829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:45:20.718203 1035829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:45:20.718258 1035829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:45:20.718342 1035829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:45:20.718454 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:45:20.725842 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:45:20.743774 1035829 start.go:296] duration metric: took 155.756233ms for postStartSetup
	I1208 01:45:20.743866 1035829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:45:20.743917 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.767930 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.871924 1035829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:45:20.876673 1035829 fix.go:56] duration metric: took 4.935541785s for fixHost
	I1208 01:45:20.876699 1035829 start.go:83] releasing machines lock for "default-k8s-diff-port-993283", held for 4.935595061s
	I1208 01:45:20.876777 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:20.893610 1035829 ssh_runner.go:195] Run: cat /version.json
	I1208 01:45:20.893629 1035829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:45:20.893668 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.893686 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.913797 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.924548 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:21.019396 1035829 ssh_runner.go:195] Run: systemctl --version
	I1208 01:45:21.119120 1035829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:45:21.156368 1035829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:45:21.161048 1035829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:45:21.161131 1035829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:45:21.169192 1035829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:45:21.169217 1035829 start.go:496] detecting cgroup driver to use...
	I1208 01:45:21.169265 1035829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:45:21.169324 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:45:21.185062 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:45:21.198491 1035829 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:45:21.198578 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:45:21.214811 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:45:21.228657 1035829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:45:21.346249 1035829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:45:21.459010 1035829 docker.go:234] disabling docker service ...
	I1208 01:45:21.459073 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:45:21.475101 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:45:21.488068 1035829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:45:21.636233 1035829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:45:21.758658 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:45:21.772339 1035829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:45:21.786086 1035829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:45:21.786154 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.795395 1035829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:45:21.795475 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.804211 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.813031 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.822344 1035829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:45:21.830646 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.839808 1035829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.848368 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.857110 1035829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:45:21.864538 1035829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:45:21.871892 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:21.988721 1035829 ssh_runner.go:195] Run: sudo systemctl restart crio
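The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the "cgroupfs" cgroup manager before crio is restarted. As a rough sketch of what those two substitutions do, here is the equivalent rewrite in Go applied to an assumed sample of the drop-in file (the sample contents are illustrative; the real file on the node is not reproduced in the log).

    package main

    import (
        "fmt"
        "regexp"
    )

    // Assumed sample of /etc/crio/crio.conf.d/02-crio.conf before the rewrite.
    const sampleConf = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `

    func main() {
        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := pause.ReplaceAllString(sampleConf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

        // Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out = cgroup.ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)

        fmt.Print(out)
    }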
	I1208 01:45:22.166046 1035829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:45:22.166136 1035829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:45:22.170033 1035829 start.go:564] Will wait 60s for crictl version
	I1208 01:45:22.170115 1035829 ssh_runner.go:195] Run: which crictl
	I1208 01:45:22.173642 1035829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:45:22.197488 1035829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:45:22.197608 1035829 ssh_runner.go:195] Run: crio --version
	I1208 01:45:22.225830 1035829 ssh_runner.go:195] Run: crio --version
	I1208 01:45:22.257552 1035829 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:45:22.260572 1035829 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993283 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:45:22.280583 1035829 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:45:22.285229 1035829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:45:22.297244 1035829 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:45:22.297371 1035829 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:45:22.297426 1035829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:45:22.338003 1035829 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:45:22.338023 1035829 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:45:22.338077 1035829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:45:22.362566 1035829 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:45:22.362586 1035829 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:45:22.362594 1035829 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1208 01:45:22.362696 1035829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:45:22.362780 1035829 ssh_runner.go:195] Run: crio config
	I1208 01:45:22.442393 1035829 cni.go:84] Creating CNI manager for ""
	I1208 01:45:22.442413 1035829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:45:22.442436 1035829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:45:22.442460 1035829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993283 NodeName:default-k8s-diff-port-993283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:45:22.442583 1035829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993283"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:45:22.442655 1035829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:45:22.450992 1035829 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:45:22.451089 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:45:22.458766 1035829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1208 01:45:22.472767 1035829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:45:22.484867 1035829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1208 01:45:22.497547 1035829 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:45:22.501239 1035829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:45:22.510601 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:22.629886 1035829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:45:22.646782 1035829 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283 for IP: 192.168.85.2
	I1208 01:45:22.646807 1035829 certs.go:195] generating shared ca certs ...
	I1208 01:45:22.646824 1035829 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:22.646989 1035829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:45:22.647040 1035829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:45:22.647051 1035829 certs.go:257] generating profile certs ...
	I1208 01:45:22.647148 1035829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.key
	I1208 01:45:22.647218 1035829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1
	I1208 01:45:22.647260 1035829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key
	I1208 01:45:22.647381 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:45:22.647422 1035829 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:45:22.647435 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:45:22.647464 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:45:22.647492 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:45:22.647520 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:45:22.647571 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:45:22.648184 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:45:22.669523 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:45:22.687110 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:45:22.704910 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:45:22.725367 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1208 01:45:22.745694 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:45:22.765565 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:45:22.783532 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:45:22.803674 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:45:22.832293 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:45:22.852595 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:45:22.891242 1035829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:45:22.908687 1035829 ssh_runner.go:195] Run: openssl version
	I1208 01:45:22.915797 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.924342 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:45:22.933143 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.937137 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.937209 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.981427 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:45:22.989259 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:45:22.997912 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:45:23.011260 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.017292 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.017358 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.059706 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:45:23.067312 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.074739 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:45:23.084105 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.088058 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.088133 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.129771 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:45:23.137603 1035829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:45:23.141451 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:45:23.182413 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:45:23.223918 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:45:23.266138 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:45:23.307870 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:45:23.368977 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
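Each `openssl x509 ... -checkend 86400` call above asks whether the given certificate stays valid for at least the next 86400 seconds (24 hours); a non-zero exit would trigger certificate regeneration. The same check expressed with Go's crypto/x509, using one of the certificate paths from the log, is sketched below; it has to run on the node (or be pointed at any local PEM certificate) to do anything useful.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path taken from the log; adjust when running outside the node.
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // `openssl x509 -checkend 86400` fails if the cert expires within 24h;
        // this comparison against NotAfter mirrors that behavior.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least 24h")
    }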
	I1208 01:45:23.419728 1035829 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:23.419865 1035829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:45:23.419975 1035829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:45:23.497166 1035829 cri.go:89] found id: "f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52"
	I1208 01:45:23.497236 1035829 cri.go:89] found id: "283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf"
	I1208 01:45:23.497254 1035829 cri.go:89] found id: ""
	I1208 01:45:23.497355 1035829 ssh_runner.go:195] Run: sudo runc list -f json
	W1208 01:45:23.518864 1035829 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:45:23Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:45:23.518991 1035829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:45:23.539871 1035829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:45:23.539942 1035829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:45:23.540027 1035829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:45:23.559232 1035829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:45:23.559755 1035829 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993283" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:23.559940 1035829 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993283" cluster setting kubeconfig missing "default-k8s-diff-port-993283" context setting]
	I1208 01:45:23.560308 1035829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.562089 1035829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:45:23.571834 1035829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:45:23.571868 1035829 kubeadm.go:602] duration metric: took 31.906957ms to restartPrimaryControlPlane
	I1208 01:45:23.571878 1035829 kubeadm.go:403] duration metric: took 152.162962ms to StartCluster
	I1208 01:45:23.571892 1035829 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.571958 1035829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:23.572605 1035829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.572823 1035829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:45:23.573121 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:23.573169 1035829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:45:23.573234 1035829 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.573248 1035829 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.573260 1035829 addons.go:248] addon storage-provisioner should already be in state true
	I1208 01:45:23.573281 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.573287 1035829 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.573309 1035829 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.573316 1035829 addons.go:248] addon dashboard should already be in state true
	I1208 01:45:23.573345 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.573741 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.573770 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.576328 1035829 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.576411 1035829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993283"
	I1208 01:45:23.577407 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.579140 1035829 out.go:179] * Verifying Kubernetes components...
	I1208 01:45:23.584180 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:23.618889 1035829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:45:23.626465 1035829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:45:23.626488 1035829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:45:23.626556 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.638933 1035829 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:45:23.646293 1035829 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:45:23.646612 1035829 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.646631 1035829 addons.go:248] addon default-storageclass should already be in state true
	I1208 01:45:23.646655 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.647169 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.649938 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:45:23.649973 1035829 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:45:23.650037 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.681188 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.703686 1035829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:45:23.703708 1035829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:45:23.703768 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.703917 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.732287 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.944649 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:45:23.944721 1035829 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:45:23.964410 1035829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:45:23.973320 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:45:23.981947 1035829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993283" to be "Ready" ...
	I1208 01:45:23.992362 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:45:24.004854 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:45:24.004951 1035829 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:45:24.060693 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:45:24.060766 1035829 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:45:24.135342 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:45:24.135413 1035829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:45:24.181962 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:45:24.182045 1035829 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:45:24.219683 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:45:24.219708 1035829 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:45:24.233934 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:45:24.233959 1035829 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:45:24.247599 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:45:24.247625 1035829 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:45:24.260995 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:45:24.261021 1035829 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:45:24.274434 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:45:28.113635 1035829 node_ready.go:49] node "default-k8s-diff-port-993283" is "Ready"
	I1208 01:45:28.113661 1035829 node_ready.go:38] duration metric: took 4.131638169s for node "default-k8s-diff-port-993283" to be "Ready" ...
	I1208 01:45:28.113675 1035829 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:45:28.113737 1035829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:45:29.902381 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.928987828s)
	I1208 01:45:29.902471 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.910017767s)
	I1208 01:45:29.902755 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.628289119s)
	I1208 01:45:29.903023 1035829 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.789273057s)
	I1208 01:45:29.903071 1035829 api_server.go:72] duration metric: took 6.330214558s to wait for apiserver process to appear ...
	I1208 01:45:29.903078 1035829 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:45:29.903104 1035829 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1208 01:45:29.906164 1035829 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993283 addons enable metrics-server
	
	I1208 01:45:29.911974 1035829 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1208 01:45:29.914080 1035829 api_server.go:141] control plane version: v1.34.2
	I1208 01:45:29.914108 1035829 api_server.go:131] duration metric: took 11.024522ms to wait for apiserver health ...
	I1208 01:45:29.914118 1035829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:45:29.918946 1035829 system_pods.go:59] 8 kube-system pods found
	I1208 01:45:29.918990 1035829 system_pods.go:61] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:45:29.919005 1035829 system_pods.go:61] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:45:29.919013 1035829 system_pods.go:61] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1208 01:45:29.919019 1035829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:45:29.919028 1035829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:45:29.919039 1035829 system_pods.go:61] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 01:45:29.919047 1035829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:45:29.919053 1035829 system_pods.go:61] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:45:29.919059 1035829 system_pods.go:74] duration metric: took 4.935447ms to wait for pod list to return data ...
	I1208 01:45:29.919068 1035829 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:45:29.919309 1035829 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1208 01:45:29.922385 1035829 addons.go:530] duration metric: took 6.34921145s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1208 01:45:29.923616 1035829 default_sa.go:45] found service account: "default"
	I1208 01:45:29.923635 1035829 default_sa.go:55] duration metric: took 4.561001ms for default service account to be created ...
	I1208 01:45:29.923644 1035829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:45:29.926642 1035829 system_pods.go:86] 8 kube-system pods found
	I1208 01:45:29.926675 1035829 system_pods.go:89] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:45:29.926686 1035829 system_pods.go:89] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:45:29.926694 1035829 system_pods.go:89] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1208 01:45:29.926701 1035829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:45:29.926708 1035829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:45:29.926721 1035829 system_pods.go:89] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 01:45:29.926736 1035829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:45:29.926744 1035829 system_pods.go:89] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:45:29.926757 1035829 system_pods.go:126] duration metric: took 3.107742ms to wait for k8s-apps to be running ...
	I1208 01:45:29.926766 1035829 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:45:29.926822 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:45:29.942592 1035829 system_svc.go:56] duration metric: took 15.815811ms WaitForService to wait for kubelet
	I1208 01:45:29.942634 1035829 kubeadm.go:587] duration metric: took 6.369775653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:45:29.942654 1035829 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:45:29.948458 1035829 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:45:29.948494 1035829 node_conditions.go:123] node cpu capacity is 2
	I1208 01:45:29.948507 1035829 node_conditions.go:105] duration metric: took 5.846775ms to run NodePressure ...
	I1208 01:45:29.948520 1035829 start.go:242] waiting for startup goroutines ...
	I1208 01:45:29.948544 1035829 start.go:247] waiting for cluster config update ...
	I1208 01:45:29.948560 1035829 start.go:256] writing updated cluster config ...
	I1208 01:45:29.948853 1035829 ssh_runner.go:195] Run: rm -f paused
	I1208 01:45:29.952629 1035829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:45:29.956461 1035829 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 01:45:31.961662 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:33.963146 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:35.964266 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:38.463102 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:40.961774 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:42.962160 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:45.462311 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:47.961218 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:49.962207 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:51.962453 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:53.962834 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:56.461708 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:58.461814 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:00.462741 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:02.962099 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:04.963144 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	I1208 01:46:05.961923 1035829 pod_ready.go:94] pod "coredns-66bc5c9577-rljsm" is "Ready"
	I1208 01:46:05.961952 1035829 pod_ready.go:86] duration metric: took 36.005459395s for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.964734 1035829 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.969275 1035829 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:05.969299 1035829 pod_ready.go:86] duration metric: took 4.537337ms for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.971508 1035829 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.975967 1035829 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:05.975997 1035829 pod_ready.go:86] duration metric: took 4.461825ms for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.978149 1035829 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.166575 1035829 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:06.166611 1035829 pod_ready.go:86] duration metric: took 188.434813ms for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.360778 1035829 pod_ready.go:83] waiting for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.759864 1035829 pod_ready.go:94] pod "kube-proxy-5vgcq" is "Ready"
	I1208 01:46:06.759894 1035829 pod_ready.go:86] duration metric: took 399.087638ms for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.959902 1035829 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:07.359724 1035829 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:07.359802 1035829 pod_ready.go:86] duration metric: took 399.87478ms for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:07.359826 1035829 pod_ready.go:40] duration metric: took 37.407162593s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:46:07.419553 1035829 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:46:07.422799 1035829 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993283" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.154332121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.157269039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1c67846788672c25bc26ee2223cfc6eccf389bb805c7a442577672749317abd5/merged/etc/passwd: no such file or directory"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.15732234Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c67846788672c25bc26ee2223cfc6eccf389bb805c7a442577672749317abd5/merged/etc/group: no such file or directory"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.157733865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.160643197Z" level=info msg="Removing container: f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.189737412Z" level=info msg="Error loading conmon cgroup of container f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65: cgroup deleted" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.218422908Z" level=info msg="Removed container f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg/dashboard-metrics-scraper" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.228412785Z" level=info msg="Created container 04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5: kube-system/storage-provisioner/storage-provisioner" id=483e2497-7d7f-49e7-ab88-84fbeb2cf265 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.229538384Z" level=info msg="Starting container: 04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5" id=5d08fbf9-f744-4755-aa46-f84e293026b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.283178523Z" level=info msg="Started container" PID=1649 containerID=04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5 description=kube-system/storage-provisioner/storage-provisioner id=5d08fbf9-f744-4755-aa46-f84e293026b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc4b4b13cd341ddfc33e8da42effce324d500f533ed3d751f133a631c275e7a1
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.824202285Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.827815912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.82784902Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.827870854Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830904996Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830945259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830983553Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834532022Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834577339Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834601471Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838332834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838365237Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838390435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.841341097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.841376601Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	04ea27949250e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   cc4b4b13cd341       storage-provisioner                                    kube-system
	45d2eea1c9074       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   2b8078ac4fbd1       dashboard-metrics-scraper-6ffb444bf9-m8wxg             kubernetes-dashboard
	3da7e3a38b756       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago      Running             kubernetes-dashboard        0                   035e221ec72e0       kubernetes-dashboard-855c9754f9-jd27p                  kubernetes-dashboard
	dfda52c6c2d5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   964f365dd9c2b       coredns-66bc5c9577-rljsm                               kube-system
	d89feb6c5cd6f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   bf6086467ddda       busybox                                                default
	cb4ee313f10a6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   a9f8eaba593f4       kindnet-2khbg                                          kube-system
	8055d13785ee0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   cc4b4b13cd341       storage-provisioner                                    kube-system
	78ec7c222c76f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           53 seconds ago      Running             kube-proxy                  1                   e53587e0c4842       kube-proxy-5vgcq                                       kube-system
	62a0bec36b793       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           59 seconds ago      Running             kube-controller-manager     1                   bbcc61dea114c       kube-controller-manager-default-k8s-diff-port-993283   kube-system
	f9b5039d8d9fc       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           59 seconds ago      Running             kube-apiserver              1                   bcfcdb5f9be89       kube-apiserver-default-k8s-diff-port-993283            kube-system
	283340f05f5b4       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           59 seconds ago      Running             etcd                        1                   31e73b7d2144b       etcd-default-k8s-diff-port-993283                      kube-system
	0ed11b92d0dbc       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           59 seconds ago      Running             kube-scheduler              1                   44845afdba0cd       kube-scheduler-default-k8s-diff-port-993283            kube-system
	
	
	==> coredns [dfda52c6c2d5a79b881816e23529a245320c288d6e1ee3012173375d03bb5e22] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51785 - 63055 "HINFO IN 1279721720863931693.1110063235223850470. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031200979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-993283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-993283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_44_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:43:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:46:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:44:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-993283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                cf00620d-cf66-43ae-830e-048a75681d0e
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-rljsm                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-default-k8s-diff-port-993283                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-2khbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-5vgcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m8wxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jd27p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m16s                  node-controller  Node default-k8s-diff-port-993283 event: Registered Node default-k8s-diff-port-993283 in Controller
	  Normal   NodeReady                94s                    kubelet          Node default-k8s-diff-port-993283 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-993283 event: Registered Node default-k8s-diff-port-993283 in Controller
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf] <==
	{"level":"warn","ts":"2025-12-08T01:45:26.442622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.477430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.491147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.500946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.538940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.555789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.583392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.594649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.613421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.683154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.689061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.712410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.731437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.748903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.772973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.797374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.834541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.858986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.881856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.915274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.957718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.036454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.037169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.085877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.194190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:46:23 up  6:28,  0 user,  load average: 0.94, 1.90, 2.05
	Linux default-k8s-diff-port-993283 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4ee313f10a6fb94576a8a6258932895e20daeec34523ae1785a5bb60dc5510] <==
	I1208 01:45:29.621811       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:45:29.623690       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:45:29.623847       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:45:29.623860       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:45:29.623873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:45:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:45:29.824163       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:45:29.824181       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:45:29.824202       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:45:29.825070       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:45:59.824674       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:45:59.824695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:45:59.824800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:45:59.825955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:46:01.424550       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:46:01.424724       1 metrics.go:72] Registering metrics
	I1208 01:46:01.425018       1 controller.go:711] "Syncing nftables rules"
	I1208 01:46:09.823816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:46:09.823858       1 main.go:301] handling current node
	I1208 01:46:19.833016       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:46:19.833057       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52] <==
	I1208 01:45:28.235050       1 policy_source.go:240] refreshing policies
	I1208 01:45:28.251118       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:45:28.266992       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:45:28.267062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:45:28.278983       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:45:28.280203       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 01:45:28.280224       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 01:45:28.280329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:45:28.280802       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1208 01:45:28.283241       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:45:28.285543       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1208 01:45:28.285663       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:45:28.293394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:45:28.303251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:45:28.918417       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:45:29.001484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:45:29.453486       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 01:45:29.600641       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:45:29.652419       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:45:29.665291       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:45:29.801249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.0.145"}
	I1208 01:45:29.838765       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.123.192"}
	I1208 01:45:31.456454       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:45:31.899252       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 01:45:32.001865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [62a0bec36b793ac0d47cde61d186b8c66550bd166b5686cd4e35764e19bfe6e8] <==
	I1208 01:45:31.451222       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 01:45:31.452118       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:45:31.453255       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:45:31.457169       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:45:31.457321       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:45:31.457786       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-993283"
	I1208 01:45:31.457877       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1208 01:45:31.459120       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:45:31.461983       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1208 01:45:31.465543       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:45:31.469366       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 01:45:31.470606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:45:31.492262       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 01:45:31.492324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 01:45:31.492470       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 01:45:31.492795       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:45:31.492925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 01:45:31.494447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:45:31.494504       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:45:31.494518       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:45:31.510349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:45:31.515513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:45:31.515530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:45:31.515545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:45:31.515555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-proxy [78ec7c222c76f0040d2984b9f18fc8cabd378412d977ffc490ac45a03fb10840] <==
	I1208 01:45:29.591414       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:45:29.831232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:45:29.931763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:45:29.931884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:45:29.932010       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:45:29.967643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:45:29.967766       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:45:29.977181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:45:29.977553       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:45:29.977728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:45:29.979592       1 config.go:200] "Starting service config controller"
	I1208 01:45:29.979657       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:45:29.979699       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:45:29.979738       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:45:29.979782       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:45:29.979821       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:45:29.983427       1 config.go:309] "Starting node config controller"
	I1208 01:45:29.983445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:45:29.983452       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:45:30.080682       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:45:30.080694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:45:30.080712       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ed11b92d0dbc90b302cb1e8297679e0137bd3ee4a68c917b318409054351ef7] <==
	I1208 01:45:25.634379       1 serving.go:386] Generated self-signed cert in-memory
	I1208 01:45:28.696182       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:45:28.696212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:45:28.705818       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1208 01:45:28.705856       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1208 01:45:28.705904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:45:28.705911       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:45:28.705924       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.705930       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.707117       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:45:28.707148       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:45:28.806779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.806875       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1208 01:45:28.806982       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242053     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl2v\" (UniqueName: \"kubernetes.io/projected/68a87282-2721-4422-adec-eef7cac49377-kube-api-access-9jl2v\") pod \"dashboard-metrics-scraper-6ffb444bf9-m8wxg\" (UID: \"68a87282-2721-4422-adec-eef7cac49377\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242076     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7a30d9ec-71bb-42f5-af3b-e6a942ad3064-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jd27p\" (UID: \"7a30d9ec-71bb-42f5-af3b-e6a942ad3064\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242099     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt97s\" (UniqueName: \"kubernetes.io/projected/7a30d9ec-71bb-42f5-af3b-e6a942ad3064-kube-api-access-dt97s\") pod \"kubernetes-dashboard-855c9754f9-jd27p\" (UID: \"7a30d9ec-71bb-42f5-af3b-e6a942ad3064\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: W1208 01:45:32.428984     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9 WatchSource:0}: Error finding container 035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9: Status 404 returned error can't find the container with id 035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: W1208 01:45:32.443671     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac WatchSource:0}: Error finding container 2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac: Status 404 returned error can't find the container with id 2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac
	Dec 08 01:45:35 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:35.686737     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:45:37 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:37.303516     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p" podStartSLOduration=0.974409014 podStartE2EDuration="5.303497417s" podCreationTimestamp="2025-12-08 01:45:32 +0000 UTC" firstStartedPulling="2025-12-08 01:45:32.432315849 +0000 UTC m=+9.783542532" lastFinishedPulling="2025-12-08 01:45:36.761404187 +0000 UTC m=+14.112630935" observedRunningTime="2025-12-08 01:45:37.045944142 +0000 UTC m=+14.397170825" watchObservedRunningTime="2025-12-08 01:45:37.303497417 +0000 UTC m=+14.654724100"
	Dec 08 01:45:41 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:41.034494     780 scope.go:117] "RemoveContainer" containerID="1226fb16efb5eecce3aa00575cdec75ee28b8094e4b4270871ba16d8ab7b71c5"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:42.039273     780 scope.go:117] "RemoveContainer" containerID="1226fb16efb5eecce3aa00575cdec75ee28b8094e4b4270871ba16d8ab7b71c5"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:42.039581     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:42.039735     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:43 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:43.043530     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:43 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:43.043723     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:48 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:48.711433     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:48 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:48.711617     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:59 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:59.892898     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.085971     780 scope.go:117] "RemoveContainer" containerID="8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.151345     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.151737     780 scope.go:117] "RemoveContainer" containerID="45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: E1208 01:46:00.151913     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:46:08 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:08.712079     780 scope.go:117] "RemoveContainer" containerID="45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	Dec 08 01:46:08 default-k8s-diff-port-993283 kubelet[780]: E1208 01:46:08.712279     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3da7e3a38b7564ab76c53ac7f7701b1a766ed67f247ad39eb03afd4d1b6cfa66] <==
	2025/12/08 01:45:36 Starting overwatch
	2025/12/08 01:45:36 Using namespace: kubernetes-dashboard
	2025/12/08 01:45:36 Using in-cluster config to connect to apiserver
	2025/12/08 01:45:36 Using secret token for csrf signing
	2025/12/08 01:45:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:45:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:45:36 Successful initial request to the apiserver, version: v1.34.2
	2025/12/08 01:45:36 Generating JWE encryption key
	2025/12/08 01:45:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:45:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:45:37 Initializing JWE encryption key from synchronized object
	2025/12/08 01:45:37 Creating in-cluster Sidecar client
	2025/12/08 01:45:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:45:37 Serving insecurely on HTTP port: 9090
	2025/12/08 01:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5] <==
	I1208 01:46:00.346230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:46:00.365879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:46:00.366233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:46:00.369805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:03.825563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:08.085537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:11.683444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:14.736834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.759304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.769259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:46:17.769566       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:46:17.770490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cab3daa-bc93-478f-a8f6-505bdc952bd0", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444 became leader
	I1208 01:46:17.770541       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444!
	W1208 01:46:17.774777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.784030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:46:17.871339       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444!
	W1208 01:46:19.787216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:19.795082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:21.801666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:21.807122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290] <==
	I1208 01:45:29.414792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:45:59.417346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283: exit status 2 (391.852284ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993283
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993283:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	        "Created": "2025-12-08T01:43:38.262395986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1035957,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:45:15.994040189Z",
	            "FinishedAt": "2025-12-08T01:45:15.213986306Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/hosts",
	        "LogPath": "/var/lib/docker/containers/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505-json.log",
	        "Name": "/default-k8s-diff-port-993283",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993283:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-993283",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505",
	                "LowerDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4915128d89b96a9436446c22587f638dc2283dce3faf57c4f9ea7930be72d326/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993283",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993283/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993283",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993283",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd126244f81621fb1020548d7c1477373dd5291c8b391bfd816cca96e5a69aad",
	            "SandboxKey": "/var/run/docker/netns/fd126244f816",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993283": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:0d:88:2a:6f:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b08c231373a28dfebdc786db1bd7305a935d3afbb9f365148f132a530c3640",
	                    "EndpointID": "82d0aff6a0595d08a4209ba5c36d31e30829191d5086c0f047e059ec0da52e7c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993283",
	                        "9cfbb32a7825"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283: exit status 2 (351.885041ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-993283 logs -n 25: (1.269197591s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ old-k8s-version-661561 image list --format=json                                                                                                                                                                                               │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                     │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                   │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                               │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:45:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:45:15.727636 1035829 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:45:15.727780 1035829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:45:15.727791 1035829 out.go:374] Setting ErrFile to fd 2...
	I1208 01:45:15.727796 1035829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:45:15.728045 1035829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:45:15.728407 1035829 out.go:368] Setting JSON to false
	I1208 01:45:15.729271 1035829 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23248,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:45:15.729339 1035829 start.go:143] virtualization:  
	I1208 01:45:15.732452 1035829 out.go:179] * [default-k8s-diff-port-993283] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:45:15.736135 1035829 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:45:15.736238 1035829 notify.go:221] Checking for updates...
	I1208 01:45:15.742040 1035829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:45:15.745120 1035829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:15.748090 1035829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:45:15.751116 1035829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:45:15.754031 1035829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:45:15.757687 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:15.758347 1035829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:45:15.784673 1035829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:45:15.784795 1035829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:45:15.841442 1035829 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:45:15.832256582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:45:15.841568 1035829 docker.go:319] overlay module found
	I1208 01:45:15.846510 1035829 out.go:179] * Using the docker driver based on existing profile
	I1208 01:45:15.849399 1035829 start.go:309] selected driver: docker
	I1208 01:45:15.849418 1035829 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:15.849541 1035829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:45:15.850233 1035829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:45:15.909636 1035829 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:45:15.900145487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:45:15.909955 1035829 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:45:15.910000 1035829 cni.go:84] Creating CNI manager for ""
	I1208 01:45:15.910062 1035829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:45:15.910104 1035829 start.go:353] cluster config:
	{Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:15.913275 1035829 out.go:179] * Starting "default-k8s-diff-port-993283" primary control-plane node in "default-k8s-diff-port-993283" cluster
	I1208 01:45:15.916186 1035829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:45:15.919161 1035829 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:45:15.922000 1035829 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:45:15.922049 1035829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 01:45:15.922059 1035829 cache.go:65] Caching tarball of preloaded images
	I1208 01:45:15.922084 1035829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:45:15.922149 1035829 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:45:15.922160 1035829 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 01:45:15.922272 1035829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:45:15.940965 1035829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:45:15.940987 1035829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:45:15.941002 1035829 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:45:15.941031 1035829 start.go:360] acquireMachinesLock for default-k8s-diff-port-993283: {Name:mk8568f2bc3d9295af85055d5f2cebcc44a030bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:45:15.941090 1035829 start.go:364] duration metric: took 34.872µs to acquireMachinesLock for "default-k8s-diff-port-993283"
	I1208 01:45:15.941115 1035829 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:45:15.941124 1035829 fix.go:54] fixHost starting: 
	I1208 01:45:15.941384 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:15.958164 1035829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993283: state=Stopped err=<nil>
	W1208 01:45:15.958193 1035829 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:45:15.961363 1035829 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993283" ...
	I1208 01:45:15.961446 1035829 cli_runner.go:164] Run: docker start default-k8s-diff-port-993283
	I1208 01:45:16.221815 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:16.244135 1035829 kic.go:430] container "default-k8s-diff-port-993283" state is running.
	I1208 01:45:16.244524 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:16.267796 1035829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/config.json ...
	I1208 01:45:16.268030 1035829 machine.go:94] provisionDockerMachine start ...
	I1208 01:45:16.268085 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:16.295243 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:16.295573 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:16.295582 1035829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:45:16.296964 1035829 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50876->127.0.0.1:33802: read: connection reset by peer
	I1208 01:45:19.454371 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:45:19.454441 1035829 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993283"
	I1208 01:45:19.454542 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:19.472337 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:19.472661 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:19.472679 1035829 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993283 && echo "default-k8s-diff-port-993283" | sudo tee /etc/hostname
	I1208 01:45:19.632498 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993283
	
	I1208 01:45:19.632622 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:19.650561 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:19.651054 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:19.651087 1035829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993283/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:45:19.803065 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:45:19.803091 1035829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:45:19.803116 1035829 ubuntu.go:190] setting up certificates
	I1208 01:45:19.803126 1035829 provision.go:84] configureAuth start
	I1208 01:45:19.803189 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:19.819588 1035829 provision.go:143] copyHostCerts
	I1208 01:45:19.819667 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:45:19.819687 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:45:19.819769 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:45:19.819887 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:45:19.819897 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:45:19.819926 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:45:19.819992 1035829 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:45:19.820001 1035829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:45:19.820031 1035829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:45:19.820095 1035829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993283 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-993283 localhost minikube]
	I1208 01:45:20.031438 1035829 provision.go:177] copyRemoteCerts
	I1208 01:45:20.031525 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:45:20.031583 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.053769 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.162693 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:45:20.180306 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1208 01:45:20.198070 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:45:20.215125 1035829 provision.go:87] duration metric: took 411.975108ms to configureAuth
	I1208 01:45:20.215152 1035829 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:45:20.215343 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:20.215442 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.233947 1035829 main.go:143] libmachine: Using SSH client type: native
	I1208 01:45:20.234408 1035829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1208 01:45:20.234427 1035829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:45:20.587969 1035829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:45:20.587990 1035829 machine.go:97] duration metric: took 4.319950835s to provisionDockerMachine
	I1208 01:45:20.588002 1035829 start.go:293] postStartSetup for "default-k8s-diff-port-993283" (driver="docker")
	I1208 01:45:20.588014 1035829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:45:20.588077 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:45:20.588118 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.607779 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.714806 1035829 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:45:20.718161 1035829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:45:20.718191 1035829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:45:20.718203 1035829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:45:20.718258 1035829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:45:20.718342 1035829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:45:20.718454 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:45:20.725842 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:45:20.743774 1035829 start.go:296] duration metric: took 155.756233ms for postStartSetup
	I1208 01:45:20.743866 1035829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:45:20.743917 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.767930 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.871924 1035829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:45:20.876673 1035829 fix.go:56] duration metric: took 4.935541785s for fixHost
	I1208 01:45:20.876699 1035829 start.go:83] releasing machines lock for "default-k8s-diff-port-993283", held for 4.935595061s
	I1208 01:45:20.876777 1035829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993283
	I1208 01:45:20.893610 1035829 ssh_runner.go:195] Run: cat /version.json
	I1208 01:45:20.893629 1035829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:45:20.893668 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.893686 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:20.913797 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:20.924548 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:21.019396 1035829 ssh_runner.go:195] Run: systemctl --version
	I1208 01:45:21.119120 1035829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:45:21.156368 1035829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:45:21.161048 1035829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:45:21.161131 1035829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:45:21.169192 1035829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:45:21.169217 1035829 start.go:496] detecting cgroup driver to use...
	I1208 01:45:21.169265 1035829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:45:21.169324 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:45:21.185062 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:45:21.198491 1035829 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:45:21.198578 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:45:21.214811 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:45:21.228657 1035829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:45:21.346249 1035829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:45:21.459010 1035829 docker.go:234] disabling docker service ...
	I1208 01:45:21.459073 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:45:21.475101 1035829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:45:21.488068 1035829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:45:21.636233 1035829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:45:21.758658 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:45:21.772339 1035829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:45:21.786086 1035829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:45:21.786154 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.795395 1035829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:45:21.795475 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.804211 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.813031 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.822344 1035829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:45:21.830646 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.839808 1035829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.848368 1035829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:45:21.857110 1035829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:45:21.864538 1035829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:45:21.871892 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:21.988721 1035829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:45:22.166046 1035829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:45:22.166136 1035829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:45:22.170033 1035829 start.go:564] Will wait 60s for crictl version
	I1208 01:45:22.170115 1035829 ssh_runner.go:195] Run: which crictl
	I1208 01:45:22.173642 1035829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:45:22.197488 1035829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:45:22.197608 1035829 ssh_runner.go:195] Run: crio --version
	I1208 01:45:22.225830 1035829 ssh_runner.go:195] Run: crio --version
	I1208 01:45:22.257552 1035829 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 01:45:22.260572 1035829 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993283 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:45:22.280583 1035829 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:45:22.285229 1035829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:45:22.297244 1035829 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:45:22.297371 1035829 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 01:45:22.297426 1035829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:45:22.338003 1035829 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:45:22.338023 1035829 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:45:22.338077 1035829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:45:22.362566 1035829 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:45:22.362586 1035829 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:45:22.362594 1035829 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1208 01:45:22.362696 1035829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993283 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:45:22.362780 1035829 ssh_runner.go:195] Run: crio config
	I1208 01:45:22.442393 1035829 cni.go:84] Creating CNI manager for ""
	I1208 01:45:22.442413 1035829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:45:22.442436 1035829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:45:22.442460 1035829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993283 NodeName:default-k8s-diff-port-993283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:45:22.442583 1035829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993283"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:45:22.442655 1035829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:45:22.450992 1035829 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:45:22.451089 1035829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:45:22.458766 1035829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1208 01:45:22.472767 1035829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:45:22.484867 1035829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1208 01:45:22.497547 1035829 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:45:22.501239 1035829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:45:22.510601 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:22.629886 1035829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:45:22.646782 1035829 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283 for IP: 192.168.85.2
	I1208 01:45:22.646807 1035829 certs.go:195] generating shared ca certs ...
	I1208 01:45:22.646824 1035829 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:22.646989 1035829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:45:22.647040 1035829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:45:22.647051 1035829 certs.go:257] generating profile certs ...
	I1208 01:45:22.647148 1035829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.key
	I1208 01:45:22.647218 1035829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key.42acf7b1
	I1208 01:45:22.647260 1035829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key
	I1208 01:45:22.647381 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:45:22.647422 1035829 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:45:22.647435 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:45:22.647464 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:45:22.647492 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:45:22.647520 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:45:22.647571 1035829 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:45:22.648184 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:45:22.669523 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:45:22.687110 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:45:22.704910 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:45:22.725367 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1208 01:45:22.745694 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:45:22.765565 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:45:22.783532 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:45:22.803674 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:45:22.832293 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:45:22.852595 1035829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:45:22.891242 1035829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:45:22.908687 1035829 ssh_runner.go:195] Run: openssl version
	I1208 01:45:22.915797 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.924342 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:45:22.933143 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.937137 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.937209 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:45:22.981427 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:45:22.989259 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:45:22.997912 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:45:23.011260 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.017292 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.017358 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:45:23.059706 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:45:23.067312 1035829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.074739 1035829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:45:23.084105 1035829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.088058 1035829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.088133 1035829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:45:23.129771 1035829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:45:23.137603 1035829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:45:23.141451 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:45:23.182413 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:45:23.223918 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:45:23.266138 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:45:23.307870 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:45:23.368977 1035829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:45:23.419728 1035829 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-993283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:45:23.419865 1035829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:45:23.419975 1035829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:45:23.497166 1035829 cri.go:89] found id: "f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52"
	I1208 01:45:23.497236 1035829 cri.go:89] found id: "283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf"
	I1208 01:45:23.497254 1035829 cri.go:89] found id: ""
	I1208 01:45:23.497355 1035829 ssh_runner.go:195] Run: sudo runc list -f json
	W1208 01:45:23.518864 1035829 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:45:23Z" level=error msg="open /run/runc: no such file or directory"
	I1208 01:45:23.518991 1035829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:45:23.539871 1035829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:45:23.539942 1035829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:45:23.540027 1035829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:45:23.559232 1035829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:45:23.559755 1035829 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993283" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:23.559940 1035829 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993283" cluster setting kubeconfig missing "default-k8s-diff-port-993283" context setting]
	I1208 01:45:23.560308 1035829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.562089 1035829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:45:23.571834 1035829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:45:23.571868 1035829 kubeadm.go:602] duration metric: took 31.906957ms to restartPrimaryControlPlane
	I1208 01:45:23.571878 1035829 kubeadm.go:403] duration metric: took 152.162962ms to StartCluster
	I1208 01:45:23.571892 1035829 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.571958 1035829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:45:23.572605 1035829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:45:23.572823 1035829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:45:23.573121 1035829 config.go:182] Loaded profile config "default-k8s-diff-port-993283": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:45:23.573169 1035829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:45:23.573234 1035829 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.573248 1035829 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.573260 1035829 addons.go:248] addon storage-provisioner should already be in state true
	I1208 01:45:23.573281 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.573287 1035829 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.573309 1035829 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.573316 1035829 addons.go:248] addon dashboard should already be in state true
	I1208 01:45:23.573345 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.573741 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.573770 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.576328 1035829 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993283"
	I1208 01:45:23.576411 1035829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993283"
	I1208 01:45:23.577407 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.579140 1035829 out.go:179] * Verifying Kubernetes components...
	I1208 01:45:23.584180 1035829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:45:23.618889 1035829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:45:23.626465 1035829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:45:23.626488 1035829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:45:23.626556 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.638933 1035829 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:45:23.646293 1035829 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:45:23.646612 1035829 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993283"
	W1208 01:45:23.646631 1035829 addons.go:248] addon default-storageclass should already be in state true
	I1208 01:45:23.646655 1035829 host.go:66] Checking if "default-k8s-diff-port-993283" exists ...
	I1208 01:45:23.647169 1035829 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993283 --format={{.State.Status}}
	I1208 01:45:23.649938 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:45:23.649973 1035829 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:45:23.650037 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.681188 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.703686 1035829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:45:23.703708 1035829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:45:23.703768 1035829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993283
	I1208 01:45:23.703917 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.732287 1035829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/default-k8s-diff-port-993283/id_rsa Username:docker}
	I1208 01:45:23.944649 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:45:23.944721 1035829 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:45:23.964410 1035829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:45:23.973320 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:45:23.981947 1035829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993283" to be "Ready" ...
	I1208 01:45:23.992362 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:45:24.004854 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:45:24.004951 1035829 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:45:24.060693 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:45:24.060766 1035829 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:45:24.135342 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:45:24.135413 1035829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:45:24.181962 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:45:24.182045 1035829 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:45:24.219683 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:45:24.219708 1035829 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:45:24.233934 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:45:24.233959 1035829 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:45:24.247599 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:45:24.247625 1035829 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:45:24.260995 1035829 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:45:24.261021 1035829 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:45:24.274434 1035829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:45:28.113635 1035829 node_ready.go:49] node "default-k8s-diff-port-993283" is "Ready"
	I1208 01:45:28.113661 1035829 node_ready.go:38] duration metric: took 4.131638169s for node "default-k8s-diff-port-993283" to be "Ready" ...
	I1208 01:45:28.113675 1035829 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:45:28.113737 1035829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:45:29.902381 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.928987828s)
	I1208 01:45:29.902471 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.910017767s)
	I1208 01:45:29.902755 1035829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.628289119s)
	I1208 01:45:29.903023 1035829 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.789273057s)
	I1208 01:45:29.903071 1035829 api_server.go:72] duration metric: took 6.330214558s to wait for apiserver process to appear ...
	I1208 01:45:29.903078 1035829 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:45:29.903104 1035829 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1208 01:45:29.906164 1035829 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993283 addons enable metrics-server
	
	I1208 01:45:29.911974 1035829 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1208 01:45:29.914080 1035829 api_server.go:141] control plane version: v1.34.2
	I1208 01:45:29.914108 1035829 api_server.go:131] duration metric: took 11.024522ms to wait for apiserver health ...
	I1208 01:45:29.914118 1035829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:45:29.918946 1035829 system_pods.go:59] 8 kube-system pods found
	I1208 01:45:29.918990 1035829 system_pods.go:61] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:45:29.919005 1035829 system_pods.go:61] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:45:29.919013 1035829 system_pods.go:61] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1208 01:45:29.919019 1035829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:45:29.919028 1035829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:45:29.919039 1035829 system_pods.go:61] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 01:45:29.919047 1035829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:45:29.919053 1035829 system_pods.go:61] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:45:29.919059 1035829 system_pods.go:74] duration metric: took 4.935447ms to wait for pod list to return data ...
	I1208 01:45:29.919068 1035829 default_sa.go:34] waiting for default service account to be created ...
	I1208 01:45:29.919309 1035829 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1208 01:45:29.922385 1035829 addons.go:530] duration metric: took 6.34921145s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1208 01:45:29.923616 1035829 default_sa.go:45] found service account: "default"
	I1208 01:45:29.923635 1035829 default_sa.go:55] duration metric: took 4.561001ms for default service account to be created ...
	I1208 01:45:29.923644 1035829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 01:45:29.926642 1035829 system_pods.go:86] 8 kube-system pods found
	I1208 01:45:29.926675 1035829 system_pods.go:89] "coredns-66bc5c9577-rljsm" [cf8077ab-2473-4eb9-be28-b6159fac1ae1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 01:45:29.926686 1035829 system_pods.go:89] "etcd-default-k8s-diff-port-993283" [b27686fa-b631-4cab-a4c6-4d10701f4f88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:45:29.926694 1035829 system_pods.go:89] "kindnet-2khbg" [f1880686-7984-4078-b524-910a8c47979c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1208 01:45:29.926701 1035829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993283" [58fb8b8e-d2a6-4e20-9ca2-c2d971d1e44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:45:29.926708 1035829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993283" [b9115a52-996e-4663-a398-c776910ec91a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:45:29.926721 1035829 system_pods.go:89] "kube-proxy-5vgcq" [af8093a4-577c-4e9c-96df-9d8da9bf3e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 01:45:29.926736 1035829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993283" [24b0d9a4-adae-4fa3-be82-094d7404e8bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:45:29.926744 1035829 system_pods.go:89] "storage-provisioner" [0c6db383-1376-476b-8750-39b98c587082] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 01:45:29.926757 1035829 system_pods.go:126] duration metric: took 3.107742ms to wait for k8s-apps to be running ...
	I1208 01:45:29.926766 1035829 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 01:45:29.926822 1035829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:45:29.942592 1035829 system_svc.go:56] duration metric: took 15.815811ms WaitForService to wait for kubelet
	I1208 01:45:29.942634 1035829 kubeadm.go:587] duration metric: took 6.369775653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:45:29.942654 1035829 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:45:29.948458 1035829 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:45:29.948494 1035829 node_conditions.go:123] node cpu capacity is 2
	I1208 01:45:29.948507 1035829 node_conditions.go:105] duration metric: took 5.846775ms to run NodePressure ...
	I1208 01:45:29.948520 1035829 start.go:242] waiting for startup goroutines ...
	I1208 01:45:29.948544 1035829 start.go:247] waiting for cluster config update ...
	I1208 01:45:29.948560 1035829 start.go:256] writing updated cluster config ...
	I1208 01:45:29.948853 1035829 ssh_runner.go:195] Run: rm -f paused
	I1208 01:45:29.952629 1035829 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:45:29.956461 1035829 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 01:45:31.961662 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:33.963146 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:35.964266 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:38.463102 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:40.961774 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:42.962160 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:45.462311 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:47.961218 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:49.962207 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:51.962453 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:53.962834 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:56.461708 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:45:58.461814 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:00.462741 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:02.962099 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	W1208 01:46:04.963144 1035829 pod_ready.go:104] pod "coredns-66bc5c9577-rljsm" is not "Ready", error: <nil>
	I1208 01:46:05.961923 1035829 pod_ready.go:94] pod "coredns-66bc5c9577-rljsm" is "Ready"
	I1208 01:46:05.961952 1035829 pod_ready.go:86] duration metric: took 36.005459395s for pod "coredns-66bc5c9577-rljsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.964734 1035829 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.969275 1035829 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:05.969299 1035829 pod_ready.go:86] duration metric: took 4.537337ms for pod "etcd-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.971508 1035829 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.975967 1035829 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:05.975997 1035829 pod_ready.go:86] duration metric: took 4.461825ms for pod "kube-apiserver-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:05.978149 1035829 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.166575 1035829 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:06.166611 1035829 pod_ready.go:86] duration metric: took 188.434813ms for pod "kube-controller-manager-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.360778 1035829 pod_ready.go:83] waiting for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.759864 1035829 pod_ready.go:94] pod "kube-proxy-5vgcq" is "Ready"
	I1208 01:46:06.759894 1035829 pod_ready.go:86] duration metric: took 399.087638ms for pod "kube-proxy-5vgcq" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:06.959902 1035829 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:07.359724 1035829 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993283" is "Ready"
	I1208 01:46:07.359802 1035829 pod_ready.go:86] duration metric: took 399.87478ms for pod "kube-scheduler-default-k8s-diff-port-993283" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 01:46:07.359826 1035829 pod_ready.go:40] duration metric: took 37.407162593s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 01:46:07.419553 1035829 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:46:07.422799 1035829 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993283" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.154332121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.157269039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1c67846788672c25bc26ee2223cfc6eccf389bb805c7a442577672749317abd5/merged/etc/passwd: no such file or directory"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.15732234Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c67846788672c25bc26ee2223cfc6eccf389bb805c7a442577672749317abd5/merged/etc/group: no such file or directory"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.157733865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.160643197Z" level=info msg="Removing container: f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.189737412Z" level=info msg="Error loading conmon cgroup of container f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65: cgroup deleted" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.218422908Z" level=info msg="Removed container f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg/dashboard-metrics-scraper" id=d7426ba2-4b46-493c-a9ec-b066dbf41456 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.228412785Z" level=info msg="Created container 04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5: kube-system/storage-provisioner/storage-provisioner" id=483e2497-7d7f-49e7-ab88-84fbeb2cf265 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.229538384Z" level=info msg="Starting container: 04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5" id=5d08fbf9-f744-4755-aa46-f84e293026b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 01:46:00 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:00.283178523Z" level=info msg="Started container" PID=1649 containerID=04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5 description=kube-system/storage-provisioner/storage-provisioner id=5d08fbf9-f744-4755-aa46-f84e293026b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc4b4b13cd341ddfc33e8da42effce324d500f533ed3d751f133a631c275e7a1
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.824202285Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.827815912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.82784902Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.827870854Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830904996Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830945259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.830983553Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834532022Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834577339Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.834601471Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838332834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838365237Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.838390435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.841341097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 08 01:46:09 default-k8s-diff-port-993283 crio[653]: time="2025-12-08T01:46:09.841376601Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	04ea27949250e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   cc4b4b13cd341       storage-provisioner                                    kube-system
	45d2eea1c9074       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   2b8078ac4fbd1       dashboard-metrics-scraper-6ffb444bf9-m8wxg             kubernetes-dashboard
	3da7e3a38b756       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   035e221ec72e0       kubernetes-dashboard-855c9754f9-jd27p                  kubernetes-dashboard
	dfda52c6c2d5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   964f365dd9c2b       coredns-66bc5c9577-rljsm                               kube-system
	d89feb6c5cd6f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   bf6086467ddda       busybox                                                default
	cb4ee313f10a6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   a9f8eaba593f4       kindnet-2khbg                                          kube-system
	8055d13785ee0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   cc4b4b13cd341       storage-provisioner                                    kube-system
	78ec7c222c76f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           55 seconds ago       Running             kube-proxy                  1                   e53587e0c4842       kube-proxy-5vgcq                                       kube-system
	62a0bec36b793       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           About a minute ago   Running             kube-controller-manager     1                   bbcc61dea114c       kube-controller-manager-default-k8s-diff-port-993283   kube-system
	f9b5039d8d9fc       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           About a minute ago   Running             kube-apiserver              1                   bcfcdb5f9be89       kube-apiserver-default-k8s-diff-port-993283            kube-system
	283340f05f5b4       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           About a minute ago   Running             etcd                        1                   31e73b7d2144b       etcd-default-k8s-diff-port-993283                      kube-system
	0ed11b92d0dbc       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           About a minute ago   Running             kube-scheduler              1                   44845afdba0cd       kube-scheduler-default-k8s-diff-port-993283            kube-system
	
	
	==> coredns [dfda52c6c2d5a79b881816e23529a245320c288d6e1ee3012173375d03bb5e22] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51785 - 63055 "HINFO IN 1279721720863931693.1110063235223850470. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031200979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993283
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-993283
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-993283
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T01_44_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 01:43:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993283
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 01:46:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:43:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 01:45:58 +0000   Mon, 08 Dec 2025 01:44:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-993283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                cf00620d-cf66-43ae-830e-048a75681d0e
	  Boot ID:                    c578946c-c2b4-4f4e-892a-39447c16cda5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-rljsm                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-993283                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-2khbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-5vgcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m8wxg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jd27p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-993283 event: Registered Node default-k8s-diff-port-993283 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-993283 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-993283 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-993283 event: Registered Node default-k8s-diff-port-993283 in Controller
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [283340f05f5b46a9aae52daca0f23092a4fa419ac2f1bfc738ff61f703369dbf] <==
	{"level":"warn","ts":"2025-12-08T01:45:26.442622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.477430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.491147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.500946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.538940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.555789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.583392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.594649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.613421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.683154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.689061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.712410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.731437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.748903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.772973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.797374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.834541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.858986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.881856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.915274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:26.957718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.036454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.037169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.085877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T01:45:27.194190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:46:25 up  6:28,  0 user,  load average: 0.94, 1.90, 2.05
	Linux default-k8s-diff-port-993283 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4ee313f10a6fb94576a8a6258932895e20daeec34523ae1785a5bb60dc5510] <==
	I1208 01:45:29.621811       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1208 01:45:29.623690       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1208 01:45:29.623847       1 main.go:148] setting mtu 1500 for CNI 
	I1208 01:45:29.623860       1 main.go:178] kindnetd IP family: "ipv4"
	I1208 01:45:29.623873       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-08T01:45:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1208 01:45:29.824163       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1208 01:45:29.824181       1 controller.go:381] "Waiting for informer caches to sync"
	I1208 01:45:29.824202       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1208 01:45:29.825070       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1208 01:45:59.824674       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1208 01:45:59.824695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1208 01:45:59.824800       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1208 01:45:59.825955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1208 01:46:01.424550       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1208 01:46:01.424724       1 metrics.go:72] Registering metrics
	I1208 01:46:01.425018       1 controller.go:711] "Syncing nftables rules"
	I1208 01:46:09.823816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:46:09.823858       1 main.go:301] handling current node
	I1208 01:46:19.833016       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1208 01:46:19.833057       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f9b5039d8d9fc79d138ec6f63a2d7fe7ee3a778b081d8f7e3bb0735293df6b52] <==
	I1208 01:45:28.235050       1 policy_source.go:240] refreshing policies
	I1208 01:45:28.251118       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 01:45:28.266992       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 01:45:28.267062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 01:45:28.278983       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 01:45:28.280203       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 01:45:28.280224       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 01:45:28.280329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1208 01:45:28.280802       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1208 01:45:28.283241       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 01:45:28.285543       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1208 01:45:28.285663       1 cache.go:39] Caches are synced for autoregister controller
	I1208 01:45:28.293394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 01:45:28.303251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 01:45:28.918417       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 01:45:29.001484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 01:45:29.453486       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 01:45:29.600641       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 01:45:29.652419       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 01:45:29.665291       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 01:45:29.801249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.0.145"}
	I1208 01:45:29.838765       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.123.192"}
	I1208 01:45:31.456454       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 01:45:31.899252       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 01:45:32.001865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [62a0bec36b793ac0d47cde61d186b8c66550bd166b5686cd4e35764e19bfe6e8] <==
	I1208 01:45:31.451222       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1208 01:45:31.452118       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1208 01:45:31.453255       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1208 01:45:31.457169       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 01:45:31.457321       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 01:45:31.457786       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-993283"
	I1208 01:45:31.457877       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1208 01:45:31.459120       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 01:45:31.461983       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1208 01:45:31.465543       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1208 01:45:31.469366       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 01:45:31.470606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:45:31.492262       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 01:45:31.492324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1208 01:45:31.492470       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 01:45:31.492795       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1208 01:45:31.492925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 01:45:31.494447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 01:45:31.494504       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1208 01:45:31.494518       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1208 01:45:31.510349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 01:45:31.515513       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1208 01:45:31.515530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1208 01:45:31.515545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1208 01:45:31.515555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-proxy [78ec7c222c76f0040d2984b9f18fc8cabd378412d977ffc490ac45a03fb10840] <==
	I1208 01:45:29.591414       1 server_linux.go:53] "Using iptables proxy"
	I1208 01:45:29.831232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 01:45:29.931763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 01:45:29.931884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1208 01:45:29.932010       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 01:45:29.967643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 01:45:29.967766       1 server_linux.go:132] "Using iptables Proxier"
	I1208 01:45:29.977181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 01:45:29.977553       1 server.go:527] "Version info" version="v1.34.2"
	I1208 01:45:29.977728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:45:29.979592       1 config.go:200] "Starting service config controller"
	I1208 01:45:29.979657       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 01:45:29.979699       1 config.go:106] "Starting endpoint slice config controller"
	I1208 01:45:29.979738       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 01:45:29.979782       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 01:45:29.979821       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 01:45:29.983427       1 config.go:309] "Starting node config controller"
	I1208 01:45:29.983445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 01:45:29.983452       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 01:45:30.080682       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 01:45:30.080694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 01:45:30.080712       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0ed11b92d0dbc90b302cb1e8297679e0137bd3ee4a68c917b318409054351ef7] <==
	I1208 01:45:25.634379       1 serving.go:386] Generated self-signed cert in-memory
	I1208 01:45:28.696182       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 01:45:28.696212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 01:45:28.705818       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1208 01:45:28.705856       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1208 01:45:28.705904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:45:28.705911       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 01:45:28.705924       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.705930       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.707117       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 01:45:28.707148       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 01:45:28.806779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1208 01:45:28.806875       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1208 01:45:28.806982       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242053     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl2v\" (UniqueName: \"kubernetes.io/projected/68a87282-2721-4422-adec-eef7cac49377-kube-api-access-9jl2v\") pod \"dashboard-metrics-scraper-6ffb444bf9-m8wxg\" (UID: \"68a87282-2721-4422-adec-eef7cac49377\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242076     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7a30d9ec-71bb-42f5-af3b-e6a942ad3064-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jd27p\" (UID: \"7a30d9ec-71bb-42f5-af3b-e6a942ad3064\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:32.242099     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt97s\" (UniqueName: \"kubernetes.io/projected/7a30d9ec-71bb-42f5-af3b-e6a942ad3064-kube-api-access-dt97s\") pod \"kubernetes-dashboard-855c9754f9-jd27p\" (UID: \"7a30d9ec-71bb-42f5-af3b-e6a942ad3064\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p"
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: W1208 01:45:32.428984     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9 WatchSource:0}: Error finding container 035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9: Status 404 returned error can't find the container with id 035e221ec72e0cce9cdfe085328354542235e8e8ff8ff54e65d8a3f9318dfea9
	Dec 08 01:45:32 default-k8s-diff-port-993283 kubelet[780]: W1208 01:45:32.443671     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9cfbb32a782594f34165848f75e45ce165f28a708812a8062230d86c6d192505/crio-2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac WatchSource:0}: Error finding container 2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac: Status 404 returned error can't find the container with id 2b8078ac4fbd112091a4fdd7c479bf1d062edbfbcafb17603b596bfaaa1eb8ac
	Dec 08 01:45:35 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:35.686737     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 01:45:37 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:37.303516     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jd27p" podStartSLOduration=0.974409014 podStartE2EDuration="5.303497417s" podCreationTimestamp="2025-12-08 01:45:32 +0000 UTC" firstStartedPulling="2025-12-08 01:45:32.432315849 +0000 UTC m=+9.783542532" lastFinishedPulling="2025-12-08 01:45:36.761404187 +0000 UTC m=+14.112630935" observedRunningTime="2025-12-08 01:45:37.045944142 +0000 UTC m=+14.397170825" watchObservedRunningTime="2025-12-08 01:45:37.303497417 +0000 UTC m=+14.654724100"
	Dec 08 01:45:41 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:41.034494     780 scope.go:117] "RemoveContainer" containerID="1226fb16efb5eecce3aa00575cdec75ee28b8094e4b4270871ba16d8ab7b71c5"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:42.039273     780 scope.go:117] "RemoveContainer" containerID="1226fb16efb5eecce3aa00575cdec75ee28b8094e4b4270871ba16d8ab7b71c5"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:42.039581     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:42 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:42.039735     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:43 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:43.043530     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:43 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:43.043723     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:48 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:48.711433     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:45:48 default-k8s-diff-port-993283 kubelet[780]: E1208 01:45:48.711617     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:45:59 default-k8s-diff-port-993283 kubelet[780]: I1208 01:45:59.892898     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.085971     780 scope.go:117] "RemoveContainer" containerID="8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.151345     780 scope.go:117] "RemoveContainer" containerID="f9675d12e5dbfec7fcf10368635adb0e29daac22ee4053e9290c304c286a6c65"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:00.151737     780 scope.go:117] "RemoveContainer" containerID="45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	Dec 08 01:46:00 default-k8s-diff-port-993283 kubelet[780]: E1208 01:46:00.151913     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:46:08 default-k8s-diff-port-993283 kubelet[780]: I1208 01:46:08.712079     780 scope.go:117] "RemoveContainer" containerID="45d2eea1c9074245896f4a51be319ea9ffe04abbb07c2cfc634266f2e48e6850"
	Dec 08 01:46:08 default-k8s-diff-port-993283 kubelet[780]: E1208 01:46:08.712279     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m8wxg_kubernetes-dashboard(68a87282-2721-4422-adec-eef7cac49377)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m8wxg" podUID="68a87282-2721-4422-adec-eef7cac49377"
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 08 01:46:20 default-k8s-diff-port-993283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3da7e3a38b7564ab76c53ac7f7701b1a766ed67f247ad39eb03afd4d1b6cfa66] <==
	2025/12/08 01:45:36 Using namespace: kubernetes-dashboard
	2025/12/08 01:45:36 Using in-cluster config to connect to apiserver
	2025/12/08 01:45:36 Using secret token for csrf signing
	2025/12/08 01:45:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 01:45:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 01:45:36 Successful initial request to the apiserver, version: v1.34.2
	2025/12/08 01:45:36 Generating JWE encryption key
	2025/12/08 01:45:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 01:45:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 01:45:37 Initializing JWE encryption key from synchronized object
	2025/12/08 01:45:37 Creating in-cluster Sidecar client
	2025/12/08 01:45:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:45:37 Serving insecurely on HTTP port: 9090
	2025/12/08 01:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/08 01:45:36 Starting overwatch
	
	
	==> storage-provisioner [04ea27949250e782348a7708c57e902486f4171126eda2bdece55c65be95b3c5] <==
	I1208 01:46:00.346230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 01:46:00.365879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 01:46:00.366233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1208 01:46:00.369805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:03.825563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:08.085537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:11.683444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:14.736834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.759304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.769259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:46:17.769566       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 01:46:17.770490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cab3daa-bc93-478f-a8f6-505bdc952bd0", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444 became leader
	I1208 01:46:17.770541       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444!
	W1208 01:46:17.774777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:17.784030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1208 01:46:17.871339       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993283_7c23b404-54f6-4485-8bfc-8ff0b2420444!
	W1208 01:46:19.787216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:19.795082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:21.801666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:21.807122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:23.810269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 01:46:23.819239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8055d13785ee0afae1ec16115b64e9ec8fa8dedf96db092ae70e87abc06dd290] <==
	I1208 01:45:29.414792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 01:45:59.417346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283: exit status 2 (369.797596ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.15s)

TestStartStop/group/newest-cni/serial/FirstStart (502.73s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 01:46:35.794118  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:46.336126  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:48:51.938833  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.051669665s)

-- stdout --
	* [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1208 01:46:29.329866 1039943 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:29.330081 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330108 1039943 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:29.330126 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330385 1039943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:29.330823 1039943 out.go:368] Setting JSON to false
	I1208 01:46:29.331797 1039943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23322,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:46:29.331896 1039943 start.go:143] virtualization:  
	I1208 01:46:29.336178 1039943 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:46:29.339647 1039943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:46:29.339692 1039943 notify.go:221] Checking for updates...
	I1208 01:46:29.343070 1039943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:46:29.346748 1039943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:46:29.349908 1039943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:46:29.353489 1039943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:46:29.356725 1039943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:46:29.360434 1039943 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:29.360559 1039943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:46:29.382085 1039943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:46:29.382198 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.440774 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.431745879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.440872 1039943 docker.go:319] overlay module found
	I1208 01:46:29.444115 1039943 out.go:179] * Using the docker driver based on user configuration
	I1208 01:46:29.447050 1039943 start.go:309] selected driver: docker
	I1208 01:46:29.447088 1039943 start.go:927] validating driver "docker" against <nil>
	I1208 01:46:29.447103 1039943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:46:29.447822 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.513492 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.504737954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.513651 1039943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:46:29.513674 1039943 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:46:29.513890 1039943 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:46:29.517063 1039943 out.go:179] * Using Docker driver with root privileges
	I1208 01:46:29.519963 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:29.520039 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:29.520052 1039943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:46:29.520136 1039943 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:29.523357 1039943 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:46:29.526151 1039943 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:46:29.529015 1039943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:46:29.531940 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:29.532005 1039943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:46:29.532021 1039943 cache.go:65] Caching tarball of preloaded images
	I1208 01:46:29.532026 1039943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:46:29.532106 1039943 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:46:29.532117 1039943 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:46:29.532224 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:29.532242 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json: {Name:mk18f08541f75fcff1b0d7777fe02845efecf137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:29.551296 1039943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:46:29.551320 1039943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:46:29.551340 1039943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:46:29.551371 1039943 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:46:29.551480 1039943 start.go:364] duration metric: took 87.493µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:46:29.551523 1039943 start.go:93] Provisioning new machine with config: &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:46:29.551657 1039943 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:46:29.555023 1039943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:46:29.555251 1039943 start.go:159] libmachine.API.Create for "newest-cni-448023" (driver="docker")
	I1208 01:46:29.555289 1039943 client.go:173] LocalClient.Create starting
	I1208 01:46:29.555374 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:46:29.555413 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555432 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555492 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:46:29.555518 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555535 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555895 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:46:29.572337 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:46:29.572449 1039943 network_create.go:284] running [docker network inspect newest-cni-448023] to gather additional debugging logs...
	I1208 01:46:29.572473 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023
	W1208 01:46:29.587652 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 returned with exit code 1
	I1208 01:46:29.587681 1039943 network_create.go:287] error running [docker network inspect newest-cni-448023]: docker network inspect newest-cni-448023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-448023 not found
	I1208 01:46:29.587697 1039943 network_create.go:289] output of [docker network inspect newest-cni-448023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-448023 not found
	
	** /stderr **
	I1208 01:46:29.587791 1039943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:29.603250 1039943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:46:29.603598 1039943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:46:29.603957 1039943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:46:29.604235 1039943 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:46:29.604628 1039943 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6ec0}
	I1208 01:46:29.604652 1039943 network_create.go:124] attempt to create docker network newest-cni-448023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:46:29.604709 1039943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023
	I1208 01:46:29.659267 1039943 network_create.go:108] docker network newest-cni-448023 192.168.85.0/24 created
	I1208 01:46:29.659307 1039943 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-448023" container
	I1208 01:46:29.659395 1039943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:46:29.675118 1039943 cli_runner.go:164] Run: docker volume create newest-cni-448023 --label name.minikube.sigs.k8s.io=newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:46:29.693502 1039943 oci.go:103] Successfully created a docker volume newest-cni-448023
	I1208 01:46:29.693603 1039943 cli_runner.go:164] Run: docker run --rm --name newest-cni-448023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --entrypoint /usr/bin/test -v newest-cni-448023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:46:30.260940 1039943 oci.go:107] Successfully prepared a docker volume newest-cni-448023
	I1208 01:46:30.261013 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:30.261031 1039943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:46:30.261099 1039943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:46:34.244465 1039943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983325366s)
	I1208 01:46:34.244500 1039943 kic.go:203] duration metric: took 3.983465364s to extract preloaded images to volume ...
	W1208 01:46:34.244633 1039943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:46:34.244781 1039943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:46:34.337950 1039943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-448023 --name newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-448023 --network newest-cni-448023 --ip 192.168.85.2 --volume newest-cni-448023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:46:34.625342 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Running}}
	I1208 01:46:34.649912 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.674400 1039943 cli_runner.go:164] Run: docker exec newest-cni-448023 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:46:34.723723 1039943 oci.go:144] the created container "newest-cni-448023" has a running status.
	I1208 01:46:34.723752 1039943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa...
	I1208 01:46:34.892140 1039943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:46:34.912965 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.938479 1039943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:46:34.938507 1039943 kic_runner.go:114] Args: [docker exec --privileged newest-cni-448023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:46:35.028018 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:35.058920 1039943 machine.go:94] provisionDockerMachine start ...
	I1208 01:46:35.059025 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:35.099088 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:35.099448 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:35.099466 1039943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:46:35.100020 1039943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47050->127.0.0.1:33807: read: connection reset by peer
	I1208 01:46:38.254334 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.254358 1039943 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:46:38.254421 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.272041 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.272365 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.272382 1039943 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:46:38.436500 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.436590 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.453974 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.454288 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.454304 1039943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:46:38.607227 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:46:38.607264 1039943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:46:38.607291 1039943 ubuntu.go:190] setting up certificates
	I1208 01:46:38.607301 1039943 provision.go:84] configureAuth start
	I1208 01:46:38.607362 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:38.623687 1039943 provision.go:143] copyHostCerts
	I1208 01:46:38.623751 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:46:38.623766 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:46:38.623843 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:46:38.623946 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:46:38.623958 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:46:38.623995 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:46:38.624062 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:46:38.624071 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:46:38.624096 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:46:38.624155 1039943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:46:38.807873 1039943 provision.go:177] copyRemoteCerts
	I1208 01:46:38.807949 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:46:38.808001 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.828753 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:38.934898 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:46:38.952864 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:46:38.970012 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:46:38.987418 1039943 provision.go:87] duration metric: took 380.093979ms to configureAuth
	I1208 01:46:38.987489 1039943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:46:38.987701 1039943 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:38.987812 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.021586 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:39.021916 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:39.021944 1039943 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:46:39.335041 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:46:39.335061 1039943 machine.go:97] duration metric: took 4.276119883s to provisionDockerMachine
	I1208 01:46:39.335070 1039943 client.go:176] duration metric: took 9.779771841s to LocalClient.Create
	I1208 01:46:39.335086 1039943 start.go:167] duration metric: took 9.779836023s to libmachine.API.Create "newest-cni-448023"
	I1208 01:46:39.335093 1039943 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:46:39.335105 1039943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:46:39.335174 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:46:39.335220 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.352266 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.458536 1039943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:46:39.461608 1039943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:46:39.461639 1039943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:46:39.461650 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:46:39.461705 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:46:39.461789 1039943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:46:39.461894 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:46:39.469247 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:39.486243 1039943 start.go:296] duration metric: took 151.134201ms for postStartSetup
	I1208 01:46:39.486633 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.504855 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:39.505123 1039943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:46:39.505164 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.523441 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.627950 1039943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:46:39.632598 1039943 start.go:128] duration metric: took 10.080925153s to createHost
	I1208 01:46:39.632621 1039943 start.go:83] releasing machines lock for "newest-cni-448023", held for 10.081126738s
	I1208 01:46:39.632691 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.652131 1039943 ssh_runner.go:195] Run: cat /version.json
	I1208 01:46:39.652157 1039943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:46:39.652183 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.652218 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.681809 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.682602 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.869694 1039943 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:39.876126 1039943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:46:39.913719 1039943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:46:39.918384 1039943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:46:39.918458 1039943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:46:39.947242 1039943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:46:39.947265 1039943 start.go:496] detecting cgroup driver to use...
	I1208 01:46:39.947298 1039943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:46:39.947349 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:46:39.965768 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:46:39.978168 1039943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:46:39.978234 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:46:39.995812 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:46:40.019051 1039943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:46:40.157466 1039943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:46:40.288788 1039943 docker.go:234] disabling docker service ...
	I1208 01:46:40.288897 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:46:40.314027 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:46:40.329209 1039943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:46:40.468296 1039943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:46:40.591028 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:46:40.604723 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:46:40.618613 1039943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:46:40.618699 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.627724 1039943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:46:40.627809 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.637292 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.646718 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.656124 1039943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:46:40.664289 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.672999 1039943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.686929 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.695637 1039943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:46:40.703116 1039943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:46:40.710332 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:40.834286 1039943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:46:41.006471 1039943 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:46:41.006581 1039943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:46:41.017809 1039943 start.go:564] Will wait 60s for crictl version
	I1208 01:46:41.017944 1039943 ssh_runner.go:195] Run: which crictl
	I1208 01:46:41.022606 1039943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:46:41.056937 1039943 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:46:41.057065 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.093495 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.124549 1039943 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:46:41.127395 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:41.143475 1039943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:46:41.147287 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.159892 1039943 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:46:41.162523 1039943 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:46:41.162667 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:41.162750 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.195193 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.195217 1039943 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:46:41.195275 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.220173 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.220196 1039943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:46:41.220203 1039943 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:46:41.220293 1039943 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:46:41.220379 1039943 ssh_runner.go:195] Run: crio config
	I1208 01:46:41.279892 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:41.279918 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:41.279934 1039943 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:46:41.279985 1039943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:46:41.280144 1039943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
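	For contrast with the "# disable disk resource management by default" block in the KubeletConfiguration above: the upstream kubelet defaults that those settings override are, per the documented defaults (values are not taken from this run), roughly:
	imageGCHighThresholdPercent: 85
	evictionHard:
	  nodefs.available: "10%"
	  nodefs.inodesFree: "5%"
	  imagefs.available: "15%"
	Setting the thresholds to "0%" and the GC high-water mark to 100, as the generated config does, effectively turns off image garbage collection and disk-pressure eviction for the test node.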
	I1208 01:46:41.280222 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:46:41.287843 1039943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:46:41.287924 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:46:41.295456 1039943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:46:41.308022 1039943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:46:41.324403 1039943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1208 01:46:41.337573 1039943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:46:41.341125 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.350760 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:41.469701 1039943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:46:41.486526 1039943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:46:41.486549 1039943 certs.go:195] generating shared ca certs ...
	I1208 01:46:41.486570 1039943 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.486758 1039943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:46:41.486827 1039943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:46:41.486867 1039943 certs.go:257] generating profile certs ...
	I1208 01:46:41.486942 1039943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:46:41.486953 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt with IP's: []
	I1208 01:46:41.756525 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt ...
	I1208 01:46:41.756551 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt: {Name:mk0603ae5124c088a63c1752061db6508bab22f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756725 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key ...
	I1208 01:46:41.756733 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key: {Name:mkca461b7eac0897c193e0836f61829f4e9d4b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756813 1039943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:46:41.756826 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:46:41.854144 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e ...
	I1208 01:46:41.854175 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e: {Name:mk808166fcccc166bf8bbe144226f9daaa100961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854378 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e ...
	I1208 01:46:41.854395 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e: {Name:mkad238fa32487b653b0a9f151377065f0951a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854489 1039943 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt
	I1208 01:46:41.854571 1039943 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key
	I1208 01:46:41.854631 1039943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:46:41.854650 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt with IP's: []
	I1208 01:46:42.097939 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt ...
	I1208 01:46:42.097979 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt: {Name:mk99d1d19a981d57bf4d12a2cb81e3e53a22a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098217 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key ...
	I1208 01:46:42.098235 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key: {Name:mk0c7b8d27fa7ac473db57ad4f3abf32e11a6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098441 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:46:42.098497 1039943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:46:42.098508 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:46:42.098536 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:46:42.098564 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:46:42.098594 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:46:42.098649 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:42.099505 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:46:42.123800 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:46:42.149931 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:46:42.172486 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:46:42.204182 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:46:42.225772 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:46:42.248373 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:46:42.277328 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:46:42.301927 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:46:42.325492 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:46:42.345377 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:46:42.363969 1039943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:46:42.376790 1039943 ssh_runner.go:195] Run: openssl version
	I1208 01:46:42.383055 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.390479 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:46:42.397965 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401796 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401919 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.443135 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:46:42.450626 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:46:42.458240 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.465745 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:46:42.473315 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477290 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477357 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.518810 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:46:42.527316 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:46:42.538286 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.547106 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:46:42.555430 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560073 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560165 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.601377 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.609019 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.616650 1039943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:46:42.620441 1039943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:46:42.620500 1039943 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:42.620585 1039943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:46:42.620649 1039943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:46:42.649932 1039943 cri.go:89] found id: ""
	I1208 01:46:42.650013 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:46:42.657890 1039943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:46:42.665577 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:46:42.665663 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:46:42.673380 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:46:42.673399 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:46:42.673455 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:46:42.681009 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:46:42.681082 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:46:42.688582 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:46:42.696709 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:46:42.696788 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:46:42.704191 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.711702 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:46:42.711814 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.719024 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:46:42.726923 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:46:42.727007 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:46:42.734562 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:46:42.771766 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:46:42.772014 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:46:42.846706 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:46:42.846791 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:46:42.846859 1039943 kubeadm.go:319] OS: Linux
	I1208 01:46:42.846914 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:46:42.846982 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:46:42.847042 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:46:42.847102 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:46:42.847163 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:46:42.847225 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:46:42.847283 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:46:42.847345 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:46:42.847396 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:46:42.914142 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:46:42.914273 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:46:42.914365 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:46:42.927340 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:46:42.933605 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:46:42.933772 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:46:42.933880 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:46:43.136966 1039943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:46:43.328738 1039943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:46:43.732500 1039943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:46:43.956866 1039943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:46:44.129125 1039943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:46:44.129375 1039943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.337195 1039943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:46:44.337494 1039943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.588532 1039943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:46:44.954533 1039943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:46:45.238719 1039943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:46:45.239782 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:46:45.718662 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:46:45.762985 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:46:46.020127 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:46:46.317772 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:46:46.545386 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:46:46.546080 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:46:46.549393 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:46:46.552921 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:46:46.553058 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:46:46.553140 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:46:46.553786 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:46:46.570986 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:46:46.571335 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:46:46.579342 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:46:46.579896 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:46:46.580195 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:46:46.716587 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:46:46.716716 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:50:46.717549 1039943 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001073185s
	I1208 01:50:46.717586 1039943 kubeadm.go:319] 
	I1208 01:50:46.717644 1039943 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:50:46.717683 1039943 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:50:46.717795 1039943 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:50:46.717806 1039943 kubeadm.go:319] 
	I1208 01:50:46.717911 1039943 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:50:46.717948 1039943 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:50:46.717983 1039943 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:50:46.717992 1039943 kubeadm.go:319] 
	I1208 01:50:46.724087 1039943 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:50:46.724517 1039943 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:50:46.724631 1039943 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:50:46.724869 1039943 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:50:46.724877 1039943 kubeadm.go:319] 
	I1208 01:50:46.724946 1039943 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 01:50:46.725066 1039943 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001073185s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001073185s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
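	The kubelet never answered its 127.0.0.1:10248/healthz probe in the attempt above, and the repeated SystemVerification warning points at a plausible root cause on this cgroup v1 host (kernel 5.15.0-1084-aws): kubelet v1.35 treats cgroups v1 as an explicit opt-in. A minimal sketch of that opt-in, assuming the standard lowerCamelCase YAML key for the 'FailCgroupV1' option the warning names (this fragment is not part of the KubeletConfiguration minikube generated for this run):
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	The warning's other requirement, explicitly skipping the validation, is already covered here, since the kubeadm init command above passes SystemVerification in --ignore-preflight-errors.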
	I1208 01:50:46.725150 1039943 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1208 01:50:47.166268 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:50:47.186152 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:50:47.186209 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:50:47.200034 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:50:47.200050 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:50:47.200102 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:50:47.209762 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:50:47.209821 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:50:47.217791 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:50:47.226159 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:50:47.226225 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:50:47.233835 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:50:47.242110 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:50:47.242172 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:50:47.249863 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:50:47.258858 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:50:47.258917 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:50:47.266382 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:50:47.429222 1039943 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:50:47.429644 1039943 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:50:47.518556 1039943 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:54:49.847048 1039943 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:54:49.847077 1039943 kubeadm.go:319] 
	I1208 01:54:49.847149 1039943 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:54:49.852553 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:54:49.852619 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:54:49.852721 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:54:49.852785 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:54:49.852825 1039943 kubeadm.go:319] OS: Linux
	I1208 01:54:49.852870 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:54:49.852918 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:54:49.852965 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:54:49.853013 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:54:49.853072 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:54:49.853130 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:54:49.853178 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:54:49.853231 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:54:49.853284 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:54:49.853372 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:54:49.853474 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:54:49.853612 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:54:49.853714 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:54:49.856709 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:54:49.856814 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:54:49.856895 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:54:49.856984 1039943 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:54:49.857061 1039943 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:54:49.857172 1039943 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:54:49.857232 1039943 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:54:49.857326 1039943 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:54:49.857415 1039943 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:54:49.857499 1039943 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:54:49.857603 1039943 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:54:49.857682 1039943 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:54:49.857823 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:54:49.857891 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:54:49.857959 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:54:49.858019 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:54:49.858108 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:54:49.858191 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:54:49.858305 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:54:49.858378 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:54:49.863237 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:54:49.863352 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:54:49.863438 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:54:49.863515 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:54:49.863629 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:54:49.863729 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:54:49.863835 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:54:49.863923 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:54:49.863965 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:54:49.864100 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:54:49.864207 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:54:49.864274 1039943 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000263477s
	I1208 01:54:49.864282 1039943 kubeadm.go:319] 
	I1208 01:54:49.864339 1039943 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:54:49.864374 1039943 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:54:49.864481 1039943 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:54:49.864489 1039943 kubeadm.go:319] 
	I1208 01:54:49.864593 1039943 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:54:49.864629 1039943 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:54:49.864662 1039943 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:54:49.864736 1039943 kubeadm.go:403] duration metric: took 8m7.244236129s to StartCluster
	I1208 01:54:49.864786 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.864852 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.864950 1039943 kubeadm.go:319] 
	I1208 01:54:49.890049 1039943 cri.go:89] found id: ""
	I1208 01:54:49.890071 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.890079 1039943 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.890086 1039943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.890149 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.915976 1039943 cri.go:89] found id: ""
	I1208 01:54:49.916000 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.916009 1039943 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.916015 1039943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.916071 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.940080 1039943 cri.go:89] found id: ""
	I1208 01:54:49.940104 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.940113 1039943 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.940119 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.940181 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.964287 1039943 cri.go:89] found id: ""
	I1208 01:54:49.964311 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.964320 1039943 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.964327 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.964382 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.987947 1039943 cri.go:89] found id: ""
	I1208 01:54:49.987971 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.987979 1039943 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.987986 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.988043 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:50.047343 1039943 cri.go:89] found id: ""
	I1208 01:54:50.047419 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.047442 1039943 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:50.047460 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:50.047550 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:50.093548 1039943 cri.go:89] found id: ""
	I1208 01:54:50.093623 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.093648 1039943 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:50.093671 1039943 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:54:50.093712 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:54:50.130017 1039943 logs.go:123] Gathering logs for container status ...
	I1208 01:54:50.130054 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:50.161671 1039943 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:50.161708 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:50.226635 1039943 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:50.226672 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:50.244811 1039943 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:50.244841 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:50.311616 1039943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 01:54:50.311639 1039943 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:54:50.311681 1039943 out.go:285] * 
	* 
	W1208 01:54:50.311744 1039943 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.311758 1039943 out.go:285] * 
	* 
	W1208 01:54:50.313886 1039943 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:54:50.318878 1039943 out.go:203] 
	W1208 01:54:50.321774 1039943 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.321820 1039943 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:54:50.321849 1039943 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:54:50.324970 1039943 out.go:203] 

                                                
                                                
** /stderr **
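The start failure above is consistent throughout the log: kubeadm reaches the wait-control-plane phase, but the kubelet never answers its health check on http://127.0.0.1:10248/healthz within 4 minutes, and the preflight warnings point at the node's cgroup setup (cgroups v1 deprecation for kubelet v1.35, plus the missing 'configs' kernel module). The log's own suggestion is to read the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of that follow-up, using the profile name and binary from this run; whether the cgroup-driver change actually resolves it on this cgroup v1 host is an assumption, not something the log confirms:

    # Inspect the kubelet inside the kic container (commands quoted from the kubeadm output above)
    out/minikube-linux-arm64 -p newest-cni-448023 ssh "sudo systemctl status kubelet"
    out/minikube-linux-arm64 -p newest-cni-448023 ssh "sudo journalctl -xeu kubelet"

    # Retry the same start with the cgroup driver named in the suggestion
    out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd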
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-448023
helpers_test.go:243: (dbg) docker inspect newest-cni-448023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	        "Created": "2025-12-08T01:46:34.353152924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1040368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:46:34.40860903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hosts",
	        "LogPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9-json.log",
	        "Name": "/newest-cni-448023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-448023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-448023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	                "LowerDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-448023",
	                "Source": "/var/lib/docker/volumes/newest-cni-448023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-448023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-448023",
	                "name.minikube.sigs.k8s.io": "newest-cni-448023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54e2b1bd2134d16d4b7d139055c4702411c741fdf7b640d1372180a746c06a18",
	            "SandboxKey": "/var/run/docker/netns/54e2b1bd2134",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-448023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:88:2b:75:de:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec5af7f0fdbc70a95f83d97d8a04145286c7acd7e864f0f850cd22983b469ab7",
	                    "EndpointID": "3442b38b17971707b26d88f3f2afa853925f6fb22e828e9bc3241996d1d592b4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-448023",
	                        "ff1a1ad3010f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
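For reference while reading the post-mortem, the inspect dump shows the kic container still Running, with the apiserver's 8443/tcp published on 127.0.0.1:33810 for this run. A small, hypothetical convenience query (not part of the test harness) that pulls the same mapping back out of the inspect data:

    # Read the published host port for the apiserver from NetworkSettings.Ports
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-448023
    # or, equivalently
    docker port newest-cni-448023 8443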
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 6 (312.893296ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:54:50.710815 1052402 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
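The status probe exits 6 here because the profile's endpoint is missing from /home/jenkins/minikube-integration/22054-789938/kubeconfig, and the status output itself recommends refreshing the context. A sketch of that follow-up, with the profile and binary taken from this run; since the apiserver never came up, this would at most repair the kubeconfig entry, not the cluster:

    # Rewrite the kubeconfig entry for this profile, as the warning suggests
    out/minikube-linux-arm64 -p newest-cni-448023 update-context
    # Confirm which context kubectl now points at
    kubectl config current-context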
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:50:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:50:50.286498 1047159 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:50:50.286662 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.286699 1047159 out.go:374] Setting ErrFile to fd 2...
	I1208 01:50:50.286711 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.287030 1047159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:50:50.287411 1047159 out.go:368] Setting JSON to false
	I1208 01:50:50.288307 1047159 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23583,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:50:50.288377 1047159 start.go:143] virtualization:  
	I1208 01:50:50.291362 1047159 out.go:179] * [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:50:50.295297 1047159 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:50:50.295435 1047159 notify.go:221] Checking for updates...
	I1208 01:50:50.301190 1047159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:50:50.304190 1047159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:50.307152 1047159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:50:50.310056 1047159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:50:50.312896 1047159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:50:50.316257 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:50.316883 1047159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:50:50.344630 1047159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:50:50.344748 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.404071 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.394347428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.404175 1047159 docker.go:319] overlay module found
	I1208 01:50:50.409385 1047159 out.go:179] * Using the docker driver based on existing profile
	I1208 01:50:50.412316 1047159 start.go:309] selected driver: docker
	I1208 01:50:50.412334 1047159 start.go:927] validating driver "docker" against &{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.412446 1047159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:50:50.413148 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.469330 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.460395311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.469668 1047159 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:50:50.469703 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:50.469766 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:50.469803 1047159 start.go:353] cluster config:
	{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.473111 1047159 out.go:179] * Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	I1208 01:50:50.475845 1047159 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:50:50.478645 1047159 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:50:50.481298 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:50.481363 1047159 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:50:50.481427 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.481703 1047159 cache.go:107] acquiring lock: {Name:mkb488f77623cf5688783098c8af8f37e2ccf2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481784 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:50:50.481800 1047159 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.113µs
	I1208 01:50:50.481812 1047159 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:50:50.481825 1047159 cache.go:107] acquiring lock: {Name:mk46c5b5a799bb57ec4fc052703439a88454d6c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481854 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:50:50.481859 1047159 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.513µs
	I1208 01:50:50.481865 1047159 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481874 1047159 cache.go:107] acquiring lock: {Name:mkd948fd592ac79c85c21b030b5344321f29366e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481904 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:50:50.481909 1047159 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.783µs
	I1208 01:50:50.481915 1047159 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481925 1047159 cache.go:107] acquiring lock: {Name:mk937612bf3f3168a18ddaac7a61a8bae665cda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481950 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:50:50.481956 1047159 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.206µs
	I1208 01:50:50.481962 1047159 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481970 1047159 cache.go:107] acquiring lock: {Name:mk12ceb359422aeb489a7c1f33a7ec5ed809694f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481994 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:50:50.481999 1047159 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.179µs
	I1208 01:50:50.482005 1047159 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:50:50.482018 1047159 cache.go:107] acquiring lock: {Name:mk26da6a2fb489baaddcecf1a83cf045eefe1b48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482042 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:50:50.482047 1047159 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.37µs
	I1208 01:50:50.482052 1047159 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:50:50.482061 1047159 cache.go:107] acquiring lock: {Name:mk855f3a105742255ca91bc6cacb964e2740cdc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482085 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:50:50.482090 1047159 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 30.138µs
	I1208 01:50:50.482095 1047159 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:50:50.482104 1047159 cache.go:107] acquiring lock: {Name:mk695dd8e1a707c0142f2b3898e789d03306fcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482128 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:50:50.482132 1047159 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.187µs
	I1208 01:50:50.482138 1047159 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:50:50.482143 1047159 cache.go:87] Successfully saved all images to host disk.
	I1208 01:50:50.501174 1047159 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:50:50.501198 1047159 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:50:50.501214 1047159 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:50:50.501245 1047159 start.go:360] acquireMachinesLock for no-preload-389831: {Name:mkc005fe96402610ac376caa09ffa5218e546ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.501307 1047159 start.go:364] duration metric: took 39.935µs to acquireMachinesLock for "no-preload-389831"
	I1208 01:50:50.501330 1047159 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:50:50.501339 1047159 fix.go:54] fixHost starting: 
	I1208 01:50:50.501613 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.517984 1047159 fix.go:112] recreateIfNeeded on no-preload-389831: state=Stopped err=<nil>
	W1208 01:50:50.518022 1047159 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:50:50.521355 1047159 out.go:252] * Restarting existing docker container for "no-preload-389831" ...
	I1208 01:50:50.521454 1047159 cli_runner.go:164] Run: docker start no-preload-389831
	I1208 01:50:50.809627 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.833454 1047159 kic.go:430] container "no-preload-389831" state is running.
	I1208 01:50:50.833842 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:50.859105 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.859482 1047159 machine.go:94] provisionDockerMachine start ...
	I1208 01:50:50.859658 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:50.883035 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:50.883401 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:50.883410 1047159 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:50:50.884458 1047159 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37578->127.0.0.1:33812: read: connection reset by peer
	I1208 01:50:54.042538 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.042564 1047159 ubuntu.go:182] provisioning hostname "no-preload-389831"
	I1208 01:50:54.042629 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.060212 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.060523 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.060540 1047159 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-389831 && echo "no-preload-389831" | sudo tee /etc/hostname
	I1208 01:50:54.224761 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.224878 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.243516 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.243871 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.243894 1047159 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-389831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-389831/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-389831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:50:54.395341 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:50:54.395369 1047159 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:50:54.395395 1047159 ubuntu.go:190] setting up certificates
	I1208 01:50:54.395406 1047159 provision.go:84] configureAuth start
	I1208 01:50:54.395468 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:54.414388 1047159 provision.go:143] copyHostCerts
	I1208 01:50:54.414467 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:50:54.414483 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:50:54.414563 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:50:54.414673 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:50:54.414678 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:50:54.414706 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:50:54.414764 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:50:54.414768 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:50:54.414791 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:50:54.414925 1047159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.no-preload-389831 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-389831]
	I1208 01:50:55.069511 1047159 provision.go:177] copyRemoteCerts
	I1208 01:50:55.069604 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:50:55.069660 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.089775 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.199796 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:50:55.220330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:50:55.238828 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:50:55.258151 1047159 provision.go:87] duration metric: took 862.724063ms to configureAuth
	I1208 01:50:55.258179 1047159 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:50:55.258429 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:55.258562 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.279199 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:55.279708 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:55.279744 1047159 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:50:55.579255 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:50:55.579319 1047159 machine.go:97] duration metric: took 4.719823255s to provisionDockerMachine
	I1208 01:50:55.579345 1047159 start.go:293] postStartSetup for "no-preload-389831" (driver="docker")
	I1208 01:50:55.579373 1047159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:50:55.579468 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:50:55.579542 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.598239 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.702980 1047159 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:50:55.706389 1047159 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:50:55.706419 1047159 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:50:55.706430 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:50:55.706488 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:50:55.706577 1047159 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:50:55.706694 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:50:55.715414 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:55.732970 1047159 start.go:296] duration metric: took 153.595815ms for postStartSetup
	I1208 01:50:55.733056 1047159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:50:55.733110 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.750836 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.851895 1047159 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:50:55.856695 1047159 fix.go:56] duration metric: took 5.355347948s for fixHost
	I1208 01:50:55.856722 1047159 start.go:83] releasing machines lock for "no-preload-389831", held for 5.355403564s
	I1208 01:50:55.856804 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:55.873802 1047159 ssh_runner.go:195] Run: cat /version.json
	I1208 01:50:55.873860 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.874134 1047159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:50:55.874190 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.891794 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.904440 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:56.005327 1047159 ssh_runner.go:195] Run: systemctl --version
	I1208 01:50:56.106979 1047159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:50:56.144384 1047159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:50:56.149115 1047159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:50:56.149201 1047159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:50:56.157950 1047159 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:50:56.157977 1047159 start.go:496] detecting cgroup driver to use...
	I1208 01:50:56.158056 1047159 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:50:56.158131 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:50:56.173988 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:50:56.188154 1047159 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:50:56.188221 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:50:56.204007 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:50:56.217383 1047159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:50:56.340458 1047159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:50:56.458244 1047159 docker.go:234] disabling docker service ...
	I1208 01:50:56.458372 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:50:56.474961 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:50:56.487941 1047159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:50:56.612532 1047159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:50:56.731416 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:50:56.744122 1047159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:50:56.762363 1047159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:50:56.762429 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.772958 1047159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:50:56.773032 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.782289 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.793260 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.807215 1047159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:50:56.816828 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.826522 1047159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.835623 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.845020 1047159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:50:56.852794 1047159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:50:56.860249 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:56.972942 1047159 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:50:57.131014 1047159 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:50:57.131096 1047159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:50:57.134814 1047159 start.go:564] Will wait 60s for crictl version
	I1208 01:50:57.134930 1047159 ssh_runner.go:195] Run: which crictl
	I1208 01:50:57.138347 1047159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:50:57.164245 1047159 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:50:57.164384 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.192737 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.223842 1047159 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:50:57.226769 1047159 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:50:57.243362 1047159 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:50:57.247217 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.257235 1047159 kubeadm.go:884] updating cluster {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:50:57.257353 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:57.257396 1047159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:50:57.289126 1047159 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:50:57.289152 1047159 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:50:57.289160 1047159 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:50:57.289257 1047159 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-389831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:50:57.289336 1047159 ssh_runner.go:195] Run: crio config
	I1208 01:50:57.362376 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:57.362445 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:57.362479 1047159 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:50:57.362529 1047159 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-389831 NodeName:no-preload-389831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:50:57.362701 1047159 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-389831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:50:57.362790 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:50:57.370735 1047159 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:50:57.370804 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:50:57.378875 1047159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:50:57.391601 1047159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:50:57.404397 1047159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1208 01:50:57.417362 1047159 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:50:57.420912 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.430378 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:57.542627 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:57.560054 1047159 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831 for IP: 192.168.76.2
	I1208 01:50:57.560086 1047159 certs.go:195] generating shared ca certs ...
	I1208 01:50:57.560102 1047159 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:57.560238 1047159 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:50:57.560289 1047159 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:50:57.560301 1047159 certs.go:257] generating profile certs ...
	I1208 01:50:57.560406 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key
	I1208 01:50:57.560476 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e
	I1208 01:50:57.560521 1047159 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key
	I1208 01:50:57.560641 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:50:57.560677 1047159 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:50:57.560689 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:50:57.560717 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:50:57.560745 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:50:57.560775 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:50:57.560824 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:57.561421 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:50:57.589599 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:50:57.607045 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:50:57.624770 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:50:57.642560 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:50:57.659981 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:50:57.677502 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:50:57.694330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:50:57.711561 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:50:57.728845 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:50:57.746226 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:50:57.763358 1047159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:50:57.775996 1047159 ssh_runner.go:195] Run: openssl version
	I1208 01:50:57.782091 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.789279 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:50:57.796521 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800117 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800178 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.840997 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:50:57.848519 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.855681 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:50:57.863319 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867059 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867155 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.909407 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:50:57.916742 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.924122 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:50:57.931834 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935527 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935597 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.976793 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:50:57.984308 1047159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:50:57.988146 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:50:58.029657 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:50:58.071087 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:50:58.113603 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:50:58.154764 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:50:58.195889 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:50:58.236998 1047159 kubeadm.go:401] StartCluster: {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:58.237105 1047159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:50:58.237204 1047159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:50:58.294166 1047159 cri.go:89] found id: ""
	I1208 01:50:58.294257 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:50:58.315702 1047159 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:50:58.315725 1047159 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:50:58.315777 1047159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:50:58.339201 1047159 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:50:58.339606 1047159 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.339709 1047159 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-389831" cluster setting kubeconfig missing "no-preload-389831" context setting]
	I1208 01:50:58.340000 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.341275 1047159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:50:58.349234 1047159 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:50:58.349268 1047159 kubeadm.go:602] duration metric: took 33.537509ms to restartPrimaryControlPlane
	I1208 01:50:58.349278 1047159 kubeadm.go:403] duration metric: took 112.291494ms to StartCluster
	I1208 01:50:58.349311 1047159 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.349387 1047159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.350038 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.350246 1047159 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:50:58.350553 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:58.350599 1047159 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:50:58.350662 1047159 addons.go:70] Setting storage-provisioner=true in profile "no-preload-389831"
	I1208 01:50:58.350682 1047159 addons.go:239] Setting addon storage-provisioner=true in "no-preload-389831"
	I1208 01:50:58.350707 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.351226 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.351698 1047159 addons.go:70] Setting dashboard=true in profile "no-preload-389831"
	I1208 01:50:58.351722 1047159 addons.go:239] Setting addon dashboard=true in "no-preload-389831"
	W1208 01:50:58.351729 1047159 addons.go:248] addon dashboard should already be in state true
	I1208 01:50:58.351754 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.352178 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.352328 1047159 addons.go:70] Setting default-storageclass=true in profile "no-preload-389831"
	I1208 01:50:58.352356 1047159 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-389831"
	I1208 01:50:58.352612 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.357573 1047159 out.go:179] * Verifying Kubernetes components...
	I1208 01:50:58.360443 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:58.387989 1047159 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:50:58.390885 1047159 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:50:58.393645 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:50:58.393668 1047159 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:50:58.393739 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.397369 1047159 addons.go:239] Setting addon default-storageclass=true in "no-preload-389831"
	I1208 01:50:58.397417 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.397928 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.404763 1047159 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:50:58.407608 1047159 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.407634 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:50:58.407695 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.415506 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.436422 1047159 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.436450 1047159 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:50:58.436511 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.465705 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.488288 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.584861 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:58.593397 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:50:58.593420 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:50:58.599131 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.612450 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:50:58.612475 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:50:58.634836 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.638144 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:50:58.638170 1047159 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:50:58.654765 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:50:58.654790 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:50:58.671149 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:50:58.671176 1047159 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:50:58.710936 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:50:58.710960 1047159 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:50:58.723710 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:50:58.723735 1047159 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:50:58.736057 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:50:58.736083 1047159 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:50:58.751933 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:58.751957 1047159 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:50:58.764645 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.017903 1047159 node_ready.go:35] waiting up to 6m0s for node "no-preload-389831" to be "Ready" ...
	W1208 01:50:59.018334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018392 1047159 retry.go:31] will retry after 331.98119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018470 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018495 1047159 retry.go:31] will retry after 297.347601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018713 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018744 1047159 retry.go:31] will retry after 160.988987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.180394 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.242451 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.242488 1047159 retry.go:31] will retry after 230.038114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.316680 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:59.351165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.388760 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.388804 1047159 retry.go:31] will retry after 306.01786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.414273 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.414313 1047159 retry.go:31] will retry after 473.308455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.473546 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.541312 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.541396 1047159 retry.go:31] will retry after 291.989778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.695757 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:50:59.766490 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.766527 1047159 retry.go:31] will retry after 640.553822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.833774 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.888354 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.905443 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.905489 1047159 retry.go:31] will retry after 440.366836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.953774 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.953806 1047159 retry.go:31] will retry after 703.737178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.346648 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:51:00.408383 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:00.427065 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.427134 1047159 retry.go:31] will retry after 1.874925767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:00.479159 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.479193 1047159 retry.go:31] will retry after 1.068550624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.658132 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:00.718468 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.718503 1047159 retry.go:31] will retry after 623.328533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:01.019492 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:01.343012 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:01.405101 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.405133 1047159 retry.go:31] will retry after 1.498168314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.548991 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:01.616790 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.616868 1047159 retry.go:31] will retry after 1.425241251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.303165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:02.370799 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.370837 1047159 retry.go:31] will retry after 1.658186868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.903558 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:02.966228 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.966264 1047159 retry.go:31] will retry after 1.304687891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.043183 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:03.103290 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.103323 1047159 retry.go:31] will retry after 1.611194242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:03.519134 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:04.029775 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:04.093970 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.094012 1047159 retry.go:31] will retry after 2.255021581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.271404 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:04.369233 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.369266 1047159 retry.go:31] will retry after 3.144995667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.715505 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:04.779555 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.779589 1047159 retry.go:31] will retry after 3.097864658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:05.519459 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:06.350184 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:06.413195 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:06.413231 1047159 retry.go:31] will retry after 2.677656272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.514488 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:07.575743 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.575780 1047159 retry.go:31] will retry after 6.329439159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.878264 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:07.943875 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.943905 1047159 retry.go:31] will retry after 2.415395367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:08.018434 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:09.092104 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:09.156844 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:09.156908 1047159 retry.go:31] will retry after 7.232089792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:10.019592 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:10.359997 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:10.420935 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:10.420968 1047159 retry.go:31] will retry after 8.971701236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:12.518554 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:13.906369 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:13.974204 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:13.974236 1047159 retry.go:31] will retry after 5.63199332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:15.018587 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:16.389784 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:16.456494 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:16.456525 1047159 retry.go:31] will retry after 8.304163321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:17.018908 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.393167 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:19.454509 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.454549 1047159 retry.go:31] will retry after 12.819064934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:19.519223 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.606483 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:19.665334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.665374 1047159 retry.go:31] will retry after 11.853810657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:22.018660 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:24.518475 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:24.760954 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:24.822030 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:24.822063 1047159 retry.go:31] will retry after 19.398232497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:26.519551 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:28.519603 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:31.018950 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:31.519706 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:31.585619 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:31.585652 1047159 retry.go:31] will retry after 9.119457049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.274696 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:32.335795 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.335830 1047159 retry.go:31] will retry after 17.730424932s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:33.519243 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:35.519358 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:38.019740 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:40.518821 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:40.706239 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:40.765447 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:40.765479 1047159 retry.go:31] will retry after 22.170334944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:43.018819 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:44.221342 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:44.285014 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:44.285052 1047159 retry.go:31] will retry after 25.025724204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:45.519041 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:48.018694 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:50.019104 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:50.066395 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:50.138630 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:50.138667 1047159 retry.go:31] will retry after 30.22765222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:52.518557 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:54.518664 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:57.018497 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:59.519498 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:02.018808 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:02.936150 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:03.008626 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:03.008665 1047159 retry.go:31] will retry after 43.423265509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:04.019439 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:06.518568 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:08.518670 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:09.311359 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:09.377364 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:09.377397 1047159 retry.go:31] will retry after 23.787430998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:10.519478 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:13.019449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:15.518771 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:18.018678 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:20.367361 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:52:20.429944 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:20.430047 1047159 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
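Every apply in this run fails at the same step: kubectl's client-side validation tries to download /openapi/v2 from the apiserver at localhost:8443 and the dial is refused, so the suggested --validate=false would only skip the schema check; the apply itself would still need a reachable apiserver (the node_ready.go probes against 192.168.76.2:8443 fail for the same reason). A minimal, illustrative Go sketch of the kind of reachability pre-check involved, not minikube's own code; the address and timeouts are assumptions:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP endpoint until it accepts connections or the
// budget is spent. Illustrative only; minikube's real readiness checks are
// richer than a bare TCP dial.
func waitForAPIServer(addr string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// Same symptom as in the log: "dial tcp ...: connect: connection refused".
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable within %s", addr, budget)
}

func main() {
	// localhost:8443 is the endpoint kubectl was validating against in this run.
	if err := waitForAPIServer("localhost:8443", 30*time.Second); err != nil {
		fmt.Println("skipping kubectl apply:", err)
		return
	}
	fmt.Println("apiserver reachable; safe to run kubectl apply")
}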
	W1208 01:52:20.519535 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:23.019133 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:25.019307 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:27.519242 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:30.018749 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:32.019308 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:33.165778 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:33.226192 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:33.226288 1047159 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:52:34.519469 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:37.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:39.519251 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:42.018723 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:44.019269 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:46.432093 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:46.497680 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:46.497781 1047159 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:52:46.502938 1047159 out.go:179] * Enabled addons: 
	I1208 01:52:46.505774 1047159 addons.go:530] duration metric: took 1m48.155164419s for enable addons: enabled=[]
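The retry intervals logged by retry.go above (between roughly 9s and 43s, generally growing per addon) are reattempted until the 1m48s addon budget is exhausted with enabled=[]. A minimal sketch of that kind of growing, jittered backoff, an illustration only and not minikube's retry.go; the growth rate, cap, and jitter are assumptions:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a growing, jittered delay until it succeeds
// or the overall budget is spent, mirroring the "will retry after ..." lines above.
func retryWithBackoff(fn func() error, initial, max, budget time.Duration) error {
	delay := initial
	deadline := time.Now().Add(budget)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("budget %s exhausted: %w", budget, err)
		}
		// Jitter keeps concurrent retries (storageclass, storage-provisioner,
		// dashboard) from hitting the apiserver at the same instant.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < max {
			delay += delay / 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("apply failed (attempt %d)", attempts)
		}
		return nil
	}, 200*time.Millisecond, 2*time.Second, 10*time.Second)
	fmt.Println("result:", err)
}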
	W1208 01:52:46.519375 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:49.018487 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:51.019331 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:53.518707 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:55.519582 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:58.019073 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:00.019588 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:02.519532 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:05.023504 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:07.518624 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:09.519024 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:11.519389 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:14.019053 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:16.518622 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:18.519227 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:21.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:23.019558 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:25.519524 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:28.019553 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:30.518668 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:33.018725 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:35.518967 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:37.519455 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:40.018547 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:42.018757 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:44.020627 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:46.518584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:49.018526 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:51.018609 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:53.518615 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:56.018528 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:58.018763 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:00.519130 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:03.018606 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:05.518787 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:08.019519 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:10.518446 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:12.518576 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:14.519449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:17.018671 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:19.518636 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:22.018525 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:24.519178 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:27.018533 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:29.518640 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:31.519126 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:33.519284 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:35.519452 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:38.018667 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:40.518930 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:42.519454 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:45.018619 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:54:49.847048 1039943 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:54:49.847077 1039943 kubeadm.go:319] 
	I1208 01:54:49.847149 1039943 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:54:49.852553 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:54:49.852619 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:54:49.852721 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:54:49.852785 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:54:49.852825 1039943 kubeadm.go:319] OS: Linux
	I1208 01:54:49.852870 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:54:49.852918 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:54:49.852965 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:54:49.853013 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:54:49.853072 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:54:49.853130 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:54:49.853178 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:54:49.853231 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:54:49.853284 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:54:49.853372 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:54:49.853474 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:54:49.853612 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:54:49.853714 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:54:49.856709 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:54:49.856814 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:54:49.856895 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:54:49.856984 1039943 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:54:49.857061 1039943 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:54:49.857172 1039943 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:54:49.857232 1039943 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:54:49.857326 1039943 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:54:49.857415 1039943 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:54:49.857499 1039943 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:54:49.857603 1039943 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:54:49.857682 1039943 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:54:49.857823 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:54:49.857891 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:54:49.857959 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:54:49.858019 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:54:49.858108 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:54:49.858191 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:54:49.858305 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:54:49.858378 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1208 01:54:47.019380 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:49.518741 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:54:49.863237 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:54:49.863352 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:54:49.863438 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:54:49.863515 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:54:49.863629 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:54:49.863729 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:54:49.863835 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:54:49.863923 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:54:49.863965 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:54:49.864100 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:54:49.864207 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:54:49.864274 1039943 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000263477s
	I1208 01:54:49.864282 1039943 kubeadm.go:319] 
	I1208 01:54:49.864339 1039943 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:54:49.864374 1039943 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:54:49.864481 1039943 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:54:49.864489 1039943 kubeadm.go:319] 
	I1208 01:54:49.864593 1039943 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:54:49.864629 1039943 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:54:49.864662 1039943 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:54:49.864736 1039943 kubeadm.go:403] duration metric: took 8m7.244236129s to StartCluster
	I1208 01:54:49.864786 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.864852 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.864950 1039943 kubeadm.go:319] 
	I1208 01:54:49.890049 1039943 cri.go:89] found id: ""
	I1208 01:54:49.890071 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.890079 1039943 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.890086 1039943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.890149 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.915976 1039943 cri.go:89] found id: ""
	I1208 01:54:49.916000 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.916009 1039943 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.916015 1039943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.916071 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.940080 1039943 cri.go:89] found id: ""
	I1208 01:54:49.940104 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.940113 1039943 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.940119 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.940181 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.964287 1039943 cri.go:89] found id: ""
	I1208 01:54:49.964311 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.964320 1039943 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.964327 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.964382 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.987947 1039943 cri.go:89] found id: ""
	I1208 01:54:49.987971 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.987979 1039943 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.987986 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.988043 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:50.047343 1039943 cri.go:89] found id: ""
	I1208 01:54:50.047419 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.047442 1039943 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:50.047460 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:50.047550 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:50.093548 1039943 cri.go:89] found id: ""
	I1208 01:54:50.093623 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.093648 1039943 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:50.093671 1039943 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:54:50.093712 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:54:50.130017 1039943 logs.go:123] Gathering logs for container status ...
	I1208 01:54:50.130054 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:50.161671 1039943 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:50.161708 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:50.226635 1039943 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:50.226672 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:50.244811 1039943 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:50.244841 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:50.311616 1039943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 01:54:50.311639 1039943 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:54:50.311681 1039943 out.go:285] * 
	W1208 01:54:50.311744 1039943 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.311758 1039943 out.go:285] * 
	W1208 01:54:50.313886 1039943 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:54:50.318878 1039943 out.go:203] 
	W1208 01:54:50.321774 1039943 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.321820 1039943 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:54:50.321849 1039943 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:54:50.324970 1039943 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995693002Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995845865Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995899314Z" level=info msg="Create NRI interface"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996004997Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996019094Z" level=info msg="runtime interface created"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996030893Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996036965Z" level=info msg="runtime interface starting up..."
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996043094Z" level=info msg="starting plugins..."
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996057051Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996114184Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:46:41 newest-cni-448023 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.917598608Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5e98e09d-a44c-41b8-bd17-ee1e89caeca7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.918797816Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4ae09c2b-38fe-49bc-adec-8679491342d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.919392727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4295c61d-2edc-437f-b3be-0511120d5e2a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.919923876Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b4468830-6f64-4d59-9957-ebdf2a248a38 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.920449551Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e0594767-1f3d-4735-bb7d-1040225db3f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.920903587Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3d7fe893-2bc7-4d91-87b7-a65a8217a281 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.921358533Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d32647af-17c0-43e7-9e16-c20f911fb4a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.52127883Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d83b1daa-f10f-4f5b-aa76-c9dc4c311d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.521944782Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f6c0a8ec-b34d-45a9-8855-7eea024dac34 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.524798818Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=bf9fcf67-7dde-4bcb-a56c-178e0a20dc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.525251451Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb63c46d-fa6e-43ca-9c60-d985a6518070 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.525684136Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=63014fa8-e5a3-4c50-b262-6167048df68d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.526643605Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cd7b0552-864c-47d0-badf-c6301cd5e261 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.527333688Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=facf0d08-f50b-421a-a5d2-9dcfee9bdbaa name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:51.403038    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:51.403837    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:51.405380    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:51.405844    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:51.407433    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:54:51 up  6:37,  0 user,  load average: 0.34, 0.79, 1.41
	Linux newest-cni-448023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:54:48 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:54:49 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Dec 08 01:54:49 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:49 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:49 newest-cni-448023 kubelet[4865]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:49 newest-cni-448023 kubelet[4865]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:49 newest-cni-448023 kubelet[4865]: E1208 01:54:49.324895    4865 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:54:49 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:54:49 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4916]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4916]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4916]: E1208 01:54:50.105966    4916 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4970]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4970]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:54:50 newest-cni-448023 kubelet[4970]: E1208 01:54:50.843300    4970 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:54:50 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 6 (351.674776ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:54:51.990139 1052631 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-448023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.73s)
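The kubelet journal captured above points at the likely root cause: kubelet v1.35.0-beta.0 exits during config validation on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), which matches the kubeadm [WARNING SystemVerification] about cgroup v1 support. A minimal sketch of the follow-up steps suggested in the output itself, assuming the same newest-cni-448023 profile (whether the cgroup-driver override alone is enough on a cgroup v1 host is an assumption to verify against the kubelet v1.35 documentation):

	# inspect the failing kubelet on the node, as suggested in the kubeadm output
	systemctl status kubelet
	journalctl -xeu kubelet

	# retry with the extra kubelet config suggested by minikube; per the kubeadm warning,
	# cgroup v1 support on kubelet v1.35 or newer additionally requires the kubelet
	# configuration option 'FailCgroupV1' to be set to 'false'
	out/minikube-linux-arm64 start -p newest-cni-448023 --extra-config=kubelet.cgroup-driver=systemd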

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-389831 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-389831 create -f testdata/busybox.yaml: exit status 1 (51.546615ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-389831" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-389831 create -f testdata/busybox.yaml failed: exit status 1
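The create fails before any request reaches a cluster: the no-preload-389831 context was never written to the kubeconfig, consistent with the earlier start of that profile not completing. An illustrative check, not part of the test run, would be:

	# list the contexts kubectl knows about; the profile should appear once its start succeeds
	kubectl config get-contexts no-preload-389831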
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:40:32.261581076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79193c30e8ff7cdcf99f747e987c12c0c02ab2d4b1e09c1f844845ffd7e244c8",
	            "SandboxKey": "/var/run/docker/netns/79193c30e8ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a7:b4:4f:0b:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "ac3963043985cb3c4beb5ad7f93727fc9a3cc524dd93131be5af0216706250c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
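The container itself is Running, so the inspect dump mostly confirms the wiring rather than the failure. When the full JSON is more than needed, a hedged alternative is to pull only the fields the post-mortem cares about via a Go template (container name from above; standard docker inspect --format syntax):

	# Run state and the host port mapped to the apiserver port 8443/tcp
	docker inspect -f '{{.State.Status}}' no-preload-389831
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-389831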
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 6 (364.648301ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:49:09.608995 1044398 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
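The harness treats exit status 6 as possibly benign, but the kubeconfig endpoint error above is the real signal: the profile is not registered in the kubeconfig this run points at. A small sketch for inspecting that file directly (path taken from the error message; standard kubectl config subcommands):

	# Show which clusters the run's kubeconfig actually contains
	KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig kubectl config get-clusters
	# An entry for no-preload-389831 should appear once minikube update-context has been run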
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                            │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:46:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:46:29.329866 1039943 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:29.330081 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330108 1039943 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:29.330126 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330385 1039943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:29.330823 1039943 out.go:368] Setting JSON to false
	I1208 01:46:29.331797 1039943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23322,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:46:29.331896 1039943 start.go:143] virtualization:  
	I1208 01:46:29.336178 1039943 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:46:29.339647 1039943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:46:29.339692 1039943 notify.go:221] Checking for updates...
	I1208 01:46:29.343070 1039943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:46:29.346748 1039943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:46:29.349908 1039943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:46:29.353489 1039943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:46:29.356725 1039943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:46:29.360434 1039943 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:29.360559 1039943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:46:29.382085 1039943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:46:29.382198 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.440774 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.431745879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.440872 1039943 docker.go:319] overlay module found
	I1208 01:46:29.444115 1039943 out.go:179] * Using the docker driver based on user configuration
	I1208 01:46:29.447050 1039943 start.go:309] selected driver: docker
	I1208 01:46:29.447088 1039943 start.go:927] validating driver "docker" against <nil>
	I1208 01:46:29.447103 1039943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:46:29.447822 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.513492 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.504737954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.513651 1039943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:46:29.513674 1039943 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:46:29.513890 1039943 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:46:29.517063 1039943 out.go:179] * Using Docker driver with root privileges
	I1208 01:46:29.519963 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:29.520039 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:29.520052 1039943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:46:29.520136 1039943 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:29.523357 1039943 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:46:29.526151 1039943 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:46:29.529015 1039943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:46:29.531940 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:29.532005 1039943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:46:29.532021 1039943 cache.go:65] Caching tarball of preloaded images
	I1208 01:46:29.532026 1039943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:46:29.532106 1039943 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:46:29.532117 1039943 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:46:29.532224 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:29.532242 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json: {Name:mk18f08541f75fcff1b0d7777fe02845efecf137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:29.551296 1039943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:46:29.551320 1039943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:46:29.551340 1039943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:46:29.551371 1039943 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:46:29.551480 1039943 start.go:364] duration metric: took 87.493µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:46:29.551523 1039943 start.go:93] Provisioning new machine with config: &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:46:29.551657 1039943 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:46:29.555023 1039943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:46:29.555251 1039943 start.go:159] libmachine.API.Create for "newest-cni-448023" (driver="docker")
	I1208 01:46:29.555289 1039943 client.go:173] LocalClient.Create starting
	I1208 01:46:29.555374 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:46:29.555413 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555432 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555492 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:46:29.555518 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555535 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555895 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:46:29.572337 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:46:29.572449 1039943 network_create.go:284] running [docker network inspect newest-cni-448023] to gather additional debugging logs...
	I1208 01:46:29.572473 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023
	W1208 01:46:29.587652 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 returned with exit code 1
	I1208 01:46:29.587681 1039943 network_create.go:287] error running [docker network inspect newest-cni-448023]: docker network inspect newest-cni-448023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-448023 not found
	I1208 01:46:29.587697 1039943 network_create.go:289] output of [docker network inspect newest-cni-448023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-448023 not found
	
	** /stderr **
	I1208 01:46:29.587791 1039943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:29.603250 1039943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:46:29.603598 1039943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:46:29.603957 1039943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:46:29.604235 1039943 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:46:29.604628 1039943 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6ec0}
	I1208 01:46:29.604652 1039943 network_create.go:124] attempt to create docker network newest-cni-448023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:46:29.604709 1039943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023
	I1208 01:46:29.659267 1039943 network_create.go:108] docker network newest-cni-448023 192.168.85.0/24 created
	I1208 01:46:29.659307 1039943 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-448023" container
	I1208 01:46:29.659395 1039943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:46:29.675118 1039943 cli_runner.go:164] Run: docker volume create newest-cni-448023 --label name.minikube.sigs.k8s.io=newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:46:29.693502 1039943 oci.go:103] Successfully created a docker volume newest-cni-448023
	I1208 01:46:29.693603 1039943 cli_runner.go:164] Run: docker run --rm --name newest-cni-448023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --entrypoint /usr/bin/test -v newest-cni-448023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:46:30.260940 1039943 oci.go:107] Successfully prepared a docker volume newest-cni-448023
	I1208 01:46:30.261013 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:30.261031 1039943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:46:30.261099 1039943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:46:34.244465 1039943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983325366s)
	I1208 01:46:34.244500 1039943 kic.go:203] duration metric: took 3.983465364s to extract preloaded images to volume ...
	W1208 01:46:34.244633 1039943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:46:34.244781 1039943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:46:34.337950 1039943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-448023 --name newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-448023 --network newest-cni-448023 --ip 192.168.85.2 --volume newest-cni-448023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:46:34.625342 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Running}}
	I1208 01:46:34.649912 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.674400 1039943 cli_runner.go:164] Run: docker exec newest-cni-448023 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:46:34.723723 1039943 oci.go:144] the created container "newest-cni-448023" has a running status.
	I1208 01:46:34.723752 1039943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa...
	I1208 01:46:34.892140 1039943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:46:34.912965 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.938479 1039943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:46:34.938507 1039943 kic_runner.go:114] Args: [docker exec --privileged newest-cni-448023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:46:35.028018 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:35.058920 1039943 machine.go:94] provisionDockerMachine start ...
	I1208 01:46:35.059025 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:35.099088 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:35.099448 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:35.099466 1039943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:46:35.100020 1039943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47050->127.0.0.1:33807: read: connection reset by peer
	I1208 01:46:38.254334 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.254358 1039943 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:46:38.254421 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.272041 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.272365 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.272382 1039943 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:46:38.436500 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.436590 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.453974 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.454288 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.454304 1039943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:46:38.607227 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:46:38.607264 1039943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:46:38.607291 1039943 ubuntu.go:190] setting up certificates
	I1208 01:46:38.607301 1039943 provision.go:84] configureAuth start
	I1208 01:46:38.607362 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:38.623687 1039943 provision.go:143] copyHostCerts
	I1208 01:46:38.623751 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:46:38.623766 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:46:38.623843 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:46:38.623946 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:46:38.623958 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:46:38.623995 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:46:38.624062 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:46:38.624071 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:46:38.624096 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:46:38.624155 1039943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:46:38.807873 1039943 provision.go:177] copyRemoteCerts
	I1208 01:46:38.807949 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:46:38.808001 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.828753 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:38.934898 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:46:38.952864 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:46:38.970012 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:46:38.987418 1039943 provision.go:87] duration metric: took 380.093979ms to configureAuth
	I1208 01:46:38.987489 1039943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:46:38.987701 1039943 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:38.987812 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.021586 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:39.021916 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:39.021944 1039943 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:46:39.335041 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:46:39.335061 1039943 machine.go:97] duration metric: took 4.276119883s to provisionDockerMachine
	I1208 01:46:39.335070 1039943 client.go:176] duration metric: took 9.779771841s to LocalClient.Create
	I1208 01:46:39.335086 1039943 start.go:167] duration metric: took 9.779836023s to libmachine.API.Create "newest-cni-448023"
	I1208 01:46:39.335093 1039943 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:46:39.335105 1039943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:46:39.335174 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:46:39.335220 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.352266 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.458536 1039943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:46:39.461608 1039943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:46:39.461639 1039943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:46:39.461650 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:46:39.461705 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:46:39.461789 1039943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:46:39.461894 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:46:39.469247 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:39.486243 1039943 start.go:296] duration metric: took 151.134201ms for postStartSetup
	I1208 01:46:39.486633 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.504855 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:39.505123 1039943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:46:39.505164 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.523441 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.627950 1039943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:46:39.632598 1039943 start.go:128] duration metric: took 10.080925153s to createHost
	I1208 01:46:39.632621 1039943 start.go:83] releasing machines lock for "newest-cni-448023", held for 10.081126738s
	I1208 01:46:39.632691 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.652131 1039943 ssh_runner.go:195] Run: cat /version.json
	I1208 01:46:39.652157 1039943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:46:39.652183 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.652218 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.681809 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.682602 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.869694 1039943 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:39.876126 1039943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:46:39.913719 1039943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:46:39.918384 1039943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:46:39.918458 1039943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:46:39.947242 1039943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
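Note: ssh_runner logs the find invocation above with its shell quoting stripped; run by hand, a roughly equivalent (properly escaped) form of the same command would be:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;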
	I1208 01:46:39.947265 1039943 start.go:496] detecting cgroup driver to use...
	I1208 01:46:39.947298 1039943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:46:39.947349 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:46:39.965768 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:46:39.978168 1039943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:46:39.978234 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:46:39.995812 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:46:40.019051 1039943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:46:40.157466 1039943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:46:40.288788 1039943 docker.go:234] disabling docker service ...
	I1208 01:46:40.288897 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:46:40.314027 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:46:40.329209 1039943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:46:40.468296 1039943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:46:40.591028 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:46:40.604723 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:46:40.618613 1039943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:46:40.618699 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.627724 1039943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:46:40.627809 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.637292 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.646718 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.656124 1039943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:46:40.664289 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.672999 1039943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.686929 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.695637 1039943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:46:40.703116 1039943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:46:40.710332 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:40.834286 1039943 ssh_runner.go:195] Run: sudo systemctl restart crio
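Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager, and open unprivileged ports via a default sysctl; after the restart, a quick sanity check against the same drop-in would look like:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",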
	I1208 01:46:41.006471 1039943 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:46:41.006581 1039943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:46:41.017809 1039943 start.go:564] Will wait 60s for crictl version
	I1208 01:46:41.017944 1039943 ssh_runner.go:195] Run: which crictl
	I1208 01:46:41.022606 1039943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:46:41.056937 1039943 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
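The version probe above resolves the runtime through the endpoint written to /etc/crictl.yaml a few steps earlier; reproducing it by hand would look roughly like:

    cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version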
	I1208 01:46:41.057065 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.093495 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.124549 1039943 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:46:41.127395 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:41.143475 1039943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:46:41.147287 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.159892 1039943 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:46:41.162523 1039943 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:46:41.162667 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:41.162750 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.195193 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.195217 1039943 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:46:41.195275 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.220173 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.220196 1039943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:46:41.220203 1039943 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:46:41.220293 1039943 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
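The [Unit]/[Service] fragment above is rendered into the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; to inspect the effective unit on the node, something like the following would do:

    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf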
	I1208 01:46:41.220379 1039943 ssh_runner.go:195] Run: crio config
	I1208 01:46:41.279892 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:41.279918 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:41.279934 1039943 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:46:41.279985 1039943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:46:41.280144 1039943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:46:41.280222 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:46:41.287843 1039943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:46:41.287924 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:46:41.295456 1039943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:46:41.308022 1039943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:46:41.324403 1039943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
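The generated kubeadm config is staged here as /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside (assuming this kubeadm build ships the 'config validate' subcommand), the file could be sanity-checked before init with:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new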
	I1208 01:46:41.337573 1039943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:46:41.341125 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.350760 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:41.469701 1039943 ssh_runner.go:195] Run: sudo systemctl start kubelet
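kubelet is started here but never enabled, which is what the later '[WARNING Service-kubelet]' preflight message complains about; checking (and optionally persisting) the unit by hand would be:

    systemctl is-active kubelet
    systemctl is-enabled kubelet             # reported as not enabled in the preflight warning below
    sudo systemctl enable kubelet.service    # optional, per kubeadm's own suggestion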
	I1208 01:46:41.486526 1039943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:46:41.486549 1039943 certs.go:195] generating shared ca certs ...
	I1208 01:46:41.486570 1039943 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.486758 1039943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:46:41.486827 1039943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:46:41.486867 1039943 certs.go:257] generating profile certs ...
	I1208 01:46:41.486942 1039943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:46:41.486953 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt with IP's: []
	I1208 01:46:41.756525 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt ...
	I1208 01:46:41.756551 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt: {Name:mk0603ae5124c088a63c1752061db6508bab22f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756725 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key ...
	I1208 01:46:41.756733 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key: {Name:mkca461b7eac0897c193e0836f61829f4e9d4b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756813 1039943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:46:41.756826 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:46:41.854144 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e ...
	I1208 01:46:41.854175 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e: {Name:mk808166fcccc166bf8bbe144226f9daaa100961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854378 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e ...
	I1208 01:46:41.854395 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e: {Name:mkad238fa32487b653b0a9f151377065f0951a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854489 1039943 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt
	I1208 01:46:41.854571 1039943 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key
	I1208 01:46:41.854631 1039943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:46:41.854650 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt with IP's: []
	I1208 01:46:42.097939 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt ...
	I1208 01:46:42.097979 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt: {Name:mk99d1d19a981d57bf4d12a2cb81e3e53a22a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098217 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key ...
	I1208 01:46:42.098235 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key: {Name:mk0c7b8d27fa7ac473db57ad4f3abf32e11a6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098441 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:46:42.098497 1039943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:46:42.098508 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:46:42.098536 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:46:42.098564 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:46:42.098594 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:46:42.098649 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:42.099505 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:46:42.123800 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:46:42.149931 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:46:42.172486 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:46:42.204182 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:46:42.225772 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:46:42.248373 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:46:42.277328 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:46:42.301927 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:46:42.325492 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:46:42.345377 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:46:42.363969 1039943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:46:42.376790 1039943 ssh_runner.go:195] Run: openssl version
	I1208 01:46:42.383055 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.390479 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:46:42.397965 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401796 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401919 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.443135 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:46:42.450626 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:46:42.458240 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.465745 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:46:42.473315 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477290 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477357 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.518810 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:46:42.527316 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:46:42.538286 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.547106 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:46:42.555430 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560073 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560165 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.601377 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.609019 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
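The openssl/ln sequence above builds the standard OpenSSL subject-hash symlinks in /etc/ssl/certs; condensed into one shot for a single certificate (paths and the b5213941 hash taken from this log):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"${HASH}.0"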
	I1208 01:46:42.616650 1039943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:46:42.620441 1039943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:46:42.620500 1039943 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:42.620585 1039943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:46:42.620649 1039943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:46:42.649932 1039943 cri.go:89] found id: ""
	I1208 01:46:42.650013 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:46:42.657890 1039943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:46:42.665577 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:46:42.665663 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:46:42.673380 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:46:42.673399 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:46:42.673455 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:46:42.681009 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:46:42.681082 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:46:42.688582 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:46:42.696709 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:46:42.696788 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:46:42.704191 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.711702 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:46:42.711814 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.719024 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:46:42.726923 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:46:42.727007 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:46:42.734562 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:46:42.771766 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:46:42.772014 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:46:42.846706 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:46:42.846791 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:46:42.846859 1039943 kubeadm.go:319] OS: Linux
	I1208 01:46:42.846914 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:46:42.846982 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:46:42.847042 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:46:42.847102 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:46:42.847163 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:46:42.847225 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:46:42.847283 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:46:42.847345 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:46:42.847396 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:46:42.914142 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:46:42.914273 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:46:42.914365 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:46:42.927340 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:46:42.933605 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:46:42.933772 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:46:42.933880 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:46:43.136966 1039943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:46:43.328738 1039943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:46:43.732500 1039943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:46:43.956866 1039943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:46:44.129125 1039943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:46:44.129375 1039943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.337195 1039943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:46:44.337494 1039943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.588532 1039943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:46:44.954533 1039943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:46:45.238719 1039943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:46:45.239782 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:46:45.718662 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:46:45.762985 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:46:46.020127 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:46:46.317772 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:46:46.545386 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:46:46.546080 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:46:46.549393 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:46:46.552921 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:46:46.553058 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:46:46.553140 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:46:46.553786 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:46:46.570986 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:46:46.571335 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:46:46.579342 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:46:46.579896 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:46:46.580195 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:46:46.716587 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:46:46.716716 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:49:07.156332 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001184239s
	I1208 01:49:07.156375 1021094 kubeadm.go:319] 
	I1208 01:49:07.156475 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:49:07.156683 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:49:07.156865 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:49:07.156875 1021094 kubeadm.go:319] 
	I1208 01:49:07.157056 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:49:07.157354 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:49:07.157410 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:49:07.157416 1021094 kubeadm.go:319] 
	I1208 01:49:07.162909 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:49:07.163434 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:49:07.163569 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:49:07.163832 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:49:07.163845 1021094 kubeadm.go:319] 
	I1208 01:49:07.163964 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
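The cgroups-v1 warning above suggests this node is still on cgroup v1, which kubelet v1.35 gates behind an explicit opt-in per that message. A sketch of how to confirm the cgroup version and what the opt-in would look like (YAML field casing assumed from the warning text):

    stat -fc %T /sys/fs/cgroup   # "tmpfs" means cgroup v1, "cgroup2fs" means cgroup v2
    # on a v1 host, the KubeletConfiguration section of the kubeadm YAML would additionally need:
    #   failCgroupV1: false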
	I1208 01:49:07.163990 1021094 kubeadm.go:403] duration metric: took 8m8.109200094s to StartCluster
	I1208 01:49:07.164030 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:49:07.164092 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:49:07.189444 1021094 cri.go:89] found id: ""
	I1208 01:49:07.189467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.189475 1021094 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:49:07.189482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:49:07.189545 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:49:07.214553 1021094 cri.go:89] found id: ""
	I1208 01:49:07.214578 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.214586 1021094 logs.go:284] No container was found matching "etcd"
	I1208 01:49:07.214592 1021094 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:49:07.214652 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:49:07.240730 1021094 cri.go:89] found id: ""
	I1208 01:49:07.240765 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.240774 1021094 logs.go:284] No container was found matching "coredns"
	I1208 01:49:07.240780 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:49:07.240877 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:49:07.275951 1021094 cri.go:89] found id: ""
	I1208 01:49:07.275976 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.275984 1021094 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:49:07.275991 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:49:07.276048 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:49:07.308446 1021094 cri.go:89] found id: ""
	I1208 01:49:07.308467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.308476 1021094 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:49:07.308482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:49:07.308544 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:49:07.337708 1021094 cri.go:89] found id: ""
	I1208 01:49:07.337730 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.337738 1021094 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:49:07.337744 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:49:07.337804 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:49:07.365399 1021094 cri.go:89] found id: ""
	I1208 01:49:07.365420 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.365428 1021094 logs.go:284] No container was found matching "kindnet"
	I1208 01:49:07.365438 1021094 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:49:07.365449 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:49:07.429624 1021094 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:49:07.429646 1021094 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:49:07.429657 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:49:07.471772 1021094 logs.go:123] Gathering logs for container status ...
	I1208 01:49:07.471809 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:49:07.507231 1021094 logs.go:123] Gathering logs for kubelet ...
	I1208 01:49:07.507258 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:49:07.572140 1021094 logs.go:123] Gathering logs for dmesg ...
	I1208 01:49:07.572179 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:49:07.589992 1021094 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:49:07.590043 1021094 out.go:285] * 
	W1208 01:49:07.590093 1021094 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.590111 1021094 out.go:285] * 
	W1208 01:49:07.592441 1021094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:49:07.598676 1021094 out.go:203] 
	W1208 01:49:07.601501 1021094 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.601539 1021094 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:49:07.601583 1021094 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:49:07.604654 1021094 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368154779Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368198923Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.013925665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014099747Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014170156Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.265576665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.26604081Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.266101947Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338552201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338884118Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338939799Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.58396125Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fe4987d-fa68-4798-80d2-b6f670609a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.599048175Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=af362a5f-b1e8-40fc-9b9b-22ea72b61af9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.601243245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d8a1b229-d4f4-4c3b-92fb-098f8f0fb136 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.60654358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0bb15a41-3aee-43e0-bbf9-fda78b30c461 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.607953861Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=981c34d5-0cb0-4db8-9c75-23c9d8d2cd19 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.611594321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a9dea912-c284-4838-a031-472efe431421 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.615047193Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fa013a89-c419-4775-97ab-ba118f73c5bc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.415842018Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5cc098c-7f40-49e5-bba2-01599a22769f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.418814555Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b7a480df-c2a0-408a-8f62-dd9431b94efc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.420546135Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=58c6887c-b0c7-4eff-b873-b4f5e7c16d5e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.42189714Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6b9ba419-3d5e-487a-8468-75890c99582f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.422761051Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8de4c7bf-c80e-41eb-9a33-14c1fff856ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.424360027Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7157efaa-0bc0-4348-a5c6-374c01495c4a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.425327118Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2a907da-3366-4a83-862f-ce206ad44275 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:10.265569    5786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:10.266346    5786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:10.267947    5786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:10.268455    5786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:10.269677    5786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:49:10 up  6:31,  0 user,  load average: 0.60, 1.37, 1.83
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:08 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:08 no-preload-389831 kubelet[5666]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:08 no-preload-389831 kubelet[5666]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:08 no-preload-389831 kubelet[5666]: E1208 01:49:08.834341    5666 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:08 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:09 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 08 01:49:09 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:09 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: E1208 01:49:09.575414    5696 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:09 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:09 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: E1208 01:49:10.321392    5791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
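The kubelet journal above pinpoints the failure: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so kubeadm's wait-control-plane phase never sees a healthy kubelet and times out after 4m0s. Below is a minimal triage sketch built only from hints already printed in the output (the cgroup-driver suggestion and the FailCgroupV1 warning); the exact flag and field spellings for v1.35.0-beta.0 are assumptions, not verified against this build.

stat -fc %T /sys/fs/cgroup/        # "cgroup2fs" = cgroup v2, "tmpfs" = legacy cgroup v1 hierarchy

# Retry with the cgroup driver suggested by minikube above (other flags taken from the original start command).
minikube start -p no-preload-389831 --memory=3072 --preload=false --driver=docker \
  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd

# The kubeadm warning names the kubelet configuration option 'FailCgroupV1'; a hypothetical
# KubeletConfiguration fragment opting back into cgroup v1 (spelling assumed from the warning text):
cat <<'EOF' > kubelet-cgroup-v1.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF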
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 6 (322.208819ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:49:10.696825 1044626 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
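The status probe compounds the picture: the apiserver is reported as Stopped, and the Jenkins kubeconfig has no "no-preload-389831" endpoint at all, which is why the kubectl-based checks are skipped. The warning in the status output names the remedy itself; as a sketch (profile name taken from this test):

# Re-sync the kubectl context with the current state of the profile.
minikube update-context -p no-preload-389831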
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:40:32.261581076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79193c30e8ff7cdcf99f747e987c12c0c02ab2d4b1e09c1f844845ffd7e244c8",
	            "SandboxKey": "/var/run/docker/netns/79193c30e8ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a7:b4:4f:0b:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "ac3963043985cb3c4beb5ad7f93727fc9a3cc524dd93131be5af0216706250c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
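The docker inspect output shows the kic container itself is healthy: State.Running is true and 8443/tcp is published on 127.0.0.1:33785; only the control plane inside it is down. A hypothetical spot check to confirm that split (container name and host port taken from the inspect output above):

docker port no-preload-389831 8443        # should print 127.0.0.1:33785
curl -k https://127.0.0.1:33785/healthz   # connection refused while kube-apiserver is not running
docker exec no-preload-389831 systemctl status kubelet --no-pager   # shows the crash-looping kubelet seen in the journal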
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 6 (361.939472ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:49:11.077692 1044704 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-661561 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                            │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:46:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:46:29.329866 1039943 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:29.330081 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330108 1039943 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:29.330126 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330385 1039943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:29.330823 1039943 out.go:368] Setting JSON to false
	I1208 01:46:29.331797 1039943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23322,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:46:29.331896 1039943 start.go:143] virtualization:  
	I1208 01:46:29.336178 1039943 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:46:29.339647 1039943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:46:29.339692 1039943 notify.go:221] Checking for updates...
	I1208 01:46:29.343070 1039943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:46:29.346748 1039943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:46:29.349908 1039943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:46:29.353489 1039943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:46:29.356725 1039943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:46:29.360434 1039943 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:29.360559 1039943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:46:29.382085 1039943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:46:29.382198 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.440774 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.431745879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.440872 1039943 docker.go:319] overlay module found
	I1208 01:46:29.444115 1039943 out.go:179] * Using the docker driver based on user configuration
	I1208 01:46:29.447050 1039943 start.go:309] selected driver: docker
	I1208 01:46:29.447088 1039943 start.go:927] validating driver "docker" against <nil>
	I1208 01:46:29.447103 1039943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:46:29.447822 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.513492 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.504737954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.513651 1039943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:46:29.513674 1039943 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:46:29.513890 1039943 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:46:29.517063 1039943 out.go:179] * Using Docker driver with root privileges
	I1208 01:46:29.519963 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:29.520039 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:29.520052 1039943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:46:29.520136 1039943 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:29.523357 1039943 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:46:29.526151 1039943 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:46:29.529015 1039943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:46:29.531940 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:29.532005 1039943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:46:29.532021 1039943 cache.go:65] Caching tarball of preloaded images
	I1208 01:46:29.532026 1039943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:46:29.532106 1039943 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:46:29.532117 1039943 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:46:29.532224 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:29.532242 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json: {Name:mk18f08541f75fcff1b0d7777fe02845efecf137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:29.551296 1039943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:46:29.551320 1039943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:46:29.551340 1039943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:46:29.551371 1039943 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:46:29.551480 1039943 start.go:364] duration metric: took 87.493µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:46:29.551523 1039943 start.go:93] Provisioning new machine with config: &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:46:29.551657 1039943 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:46:29.555023 1039943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:46:29.555251 1039943 start.go:159] libmachine.API.Create for "newest-cni-448023" (driver="docker")
	I1208 01:46:29.555289 1039943 client.go:173] LocalClient.Create starting
	I1208 01:46:29.555374 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:46:29.555413 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555432 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555492 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:46:29.555518 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555535 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555895 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:46:29.572337 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:46:29.572449 1039943 network_create.go:284] running [docker network inspect newest-cni-448023] to gather additional debugging logs...
	I1208 01:46:29.572473 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023
	W1208 01:46:29.587652 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 returned with exit code 1
	I1208 01:46:29.587681 1039943 network_create.go:287] error running [docker network inspect newest-cni-448023]: docker network inspect newest-cni-448023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-448023 not found
	I1208 01:46:29.587697 1039943 network_create.go:289] output of [docker network inspect newest-cni-448023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-448023 not found
	
	** /stderr **
	I1208 01:46:29.587791 1039943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:29.603250 1039943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:46:29.603598 1039943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:46:29.603957 1039943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:46:29.604235 1039943 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:46:29.604628 1039943 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6ec0}
	I1208 01:46:29.604652 1039943 network_create.go:124] attempt to create docker network newest-cni-448023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:46:29.604709 1039943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023
	I1208 01:46:29.659267 1039943 network_create.go:108] docker network newest-cni-448023 192.168.85.0/24 created
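The subnet scan above walks the existing Docker bridge networks (192.168.49/58/67/76.0/24 are all taken) and settles on the first free /24, 192.168.85.0/24. A minimal manual equivalent, using the same commands the log shows, would be roughly:

	# list the subnet of an existing network to see whether a candidate /24 is free
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# create the cluster network on the chosen free /24 (flags copied from the log line above)
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023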
	I1208 01:46:29.659307 1039943 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-448023" container
	I1208 01:46:29.659395 1039943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:46:29.675118 1039943 cli_runner.go:164] Run: docker volume create newest-cni-448023 --label name.minikube.sigs.k8s.io=newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:46:29.693502 1039943 oci.go:103] Successfully created a docker volume newest-cni-448023
	I1208 01:46:29.693603 1039943 cli_runner.go:164] Run: docker run --rm --name newest-cni-448023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --entrypoint /usr/bin/test -v newest-cni-448023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:46:30.260940 1039943 oci.go:107] Successfully prepared a docker volume newest-cni-448023
	I1208 01:46:30.261013 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:30.261031 1039943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:46:30.261099 1039943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:46:34.244465 1039943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983325366s)
	I1208 01:46:34.244500 1039943 kic.go:203] duration metric: took 3.983465364s to extract preloaded images to volume ...
	W1208 01:46:34.244633 1039943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:46:34.244781 1039943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:46:34.337950 1039943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-448023 --name newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-448023 --network newest-cni-448023 --ip 192.168.85.2 --volume newest-cni-448023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:46:34.625342 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Running}}
	I1208 01:46:34.649912 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.674400 1039943 cli_runner.go:164] Run: docker exec newest-cni-448023 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:46:34.723723 1039943 oci.go:144] the created container "newest-cni-448023" has a running status.
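Each --publish=127.0.0.1:: flag in the docker run above binds the listed container port to an ephemeral port on the host loopback; the SSH port used for the rest of the provisioning (33807 below) comes from that mapping. The mappings can be looked up directly with:

	docker port newest-cni-448023 22/tcp     # e.g. 127.0.0.1:33807, the SSH endpoint used below
	docker port newest-cni-448023 8443/tcp   # host-side port for the Kubernetes API server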
	I1208 01:46:34.723752 1039943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa...
	I1208 01:46:34.892140 1039943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:46:34.912965 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.938479 1039943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:46:34.938507 1039943 kic_runner.go:114] Args: [docker exec --privileged newest-cni-448023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:46:35.028018 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:35.058920 1039943 machine.go:94] provisionDockerMachine start ...
	I1208 01:46:35.059025 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:35.099088 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:35.099448 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:35.099466 1039943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:46:35.100020 1039943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47050->127.0.0.1:33807: read: connection reset by peer
	I1208 01:46:38.254334 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.254358 1039943 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:46:38.254421 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.272041 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.272365 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.272382 1039943 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:46:38.436500 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.436590 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.453974 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.454288 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.454304 1039943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:46:38.607227 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:46:38.607264 1039943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:46:38.607291 1039943 ubuntu.go:190] setting up certificates
	I1208 01:46:38.607301 1039943 provision.go:84] configureAuth start
	I1208 01:46:38.607362 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:38.623687 1039943 provision.go:143] copyHostCerts
	I1208 01:46:38.623751 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:46:38.623766 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:46:38.623843 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:46:38.623946 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:46:38.623958 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:46:38.623995 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:46:38.624062 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:46:38.624071 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:46:38.624096 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:46:38.624155 1039943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
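The server certificate is generated with the SANs listed above (loopback, the static container IP 192.168.85.2, and the host/cluster names). If needed, the SANs on the generated file can be confirmed with openssl:

	openssl x509 -in /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'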
	I1208 01:46:38.807873 1039943 provision.go:177] copyRemoteCerts
	I1208 01:46:38.807949 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:46:38.808001 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.828753 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:38.934898 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:46:38.952864 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:46:38.970012 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:46:38.987418 1039943 provision.go:87] duration metric: took 380.093979ms to configureAuth
	I1208 01:46:38.987489 1039943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:46:38.987701 1039943 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:38.987812 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.021586 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:39.021916 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:39.021944 1039943 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:46:39.335041 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:46:39.335061 1039943 machine.go:97] duration metric: took 4.276119883s to provisionDockerMachine
	I1208 01:46:39.335070 1039943 client.go:176] duration metric: took 9.779771841s to LocalClient.Create
	I1208 01:46:39.335086 1039943 start.go:167] duration metric: took 9.779836023s to libmachine.API.Create "newest-cni-448023"
	I1208 01:46:39.335093 1039943 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:46:39.335105 1039943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:46:39.335174 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:46:39.335220 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.352266 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.458536 1039943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:46:39.461608 1039943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:46:39.461639 1039943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:46:39.461650 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:46:39.461705 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:46:39.461789 1039943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:46:39.461894 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:46:39.469247 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:39.486243 1039943 start.go:296] duration metric: took 151.134201ms for postStartSetup
	I1208 01:46:39.486633 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.504855 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:39.505123 1039943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:46:39.505164 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.523441 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.627950 1039943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:46:39.632598 1039943 start.go:128] duration metric: took 10.080925153s to createHost
	I1208 01:46:39.632621 1039943 start.go:83] releasing machines lock for "newest-cni-448023", held for 10.081126738s
	I1208 01:46:39.632691 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.652131 1039943 ssh_runner.go:195] Run: cat /version.json
	I1208 01:46:39.652157 1039943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:46:39.652183 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.652218 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.681809 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.682602 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.869694 1039943 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:39.876126 1039943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:46:39.913719 1039943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:46:39.918384 1039943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:46:39.918458 1039943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:46:39.947242 1039943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:46:39.947265 1039943 start.go:496] detecting cgroup driver to use...
	I1208 01:46:39.947298 1039943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:46:39.947349 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:46:39.965768 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:46:39.978168 1039943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:46:39.978234 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:46:39.995812 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:46:40.019051 1039943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:46:40.157466 1039943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:46:40.288788 1039943 docker.go:234] disabling docker service ...
	I1208 01:46:40.288897 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:46:40.314027 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:46:40.329209 1039943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:46:40.468296 1039943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:46:40.591028 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:46:40.604723 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:46:40.618613 1039943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:46:40.618699 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.627724 1039943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:46:40.627809 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.637292 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.646718 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.656124 1039943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:46:40.664289 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.672999 1039943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.686929 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.695637 1039943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:46:40.703116 1039943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:46:40.710332 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:40.834286 1039943 ssh_runner.go:195] Run: sudo systemctl restart crio
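Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before CRI-O is restarted (a reconstruction from the commands, assuming the stock section layout of that drop-in, not a capture of the actual file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]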
	I1208 01:46:41.006471 1039943 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:46:41.006581 1039943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:46:41.017809 1039943 start.go:564] Will wait 60s for crictl version
	I1208 01:46:41.017944 1039943 ssh_runner.go:195] Run: which crictl
	I1208 01:46:41.022606 1039943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:46:41.056937 1039943 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:46:41.057065 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.093495 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.124549 1039943 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:46:41.127395 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:41.143475 1039943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:46:41.147287 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
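The entry added here maps host.minikube.internal to the network gateway (192.168.85.1), giving workloads inside the node a stable name for reaching the host. From inside the container it can be checked with, for example:

	getent hosts host.minikube.internal   # should resolve to 192.168.85.1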
	I1208 01:46:41.159892 1039943 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:46:41.162523 1039943 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:46:41.162667 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:41.162750 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.195193 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.195217 1039943 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:46:41.195275 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.220173 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.220196 1039943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:46:41.220203 1039943 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:46:41.220293 1039943 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
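The unit snippet above is a systemd drop-in: the empty ExecStart= line clears the packaged default before the second ExecStart= sets the kubelet command line with the cluster-specific flags (--node-ip, --hostname-override, --config=/var/lib/kubelet/config.yaml). It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; on the node, the merged unit can be reviewed with:

	systemctl cat kubelet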
	I1208 01:46:41.220379 1039943 ssh_runner.go:195] Run: crio config
	I1208 01:46:41.279892 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:41.279918 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:41.279934 1039943 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:46:41.279985 1039943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:46:41.280144 1039943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:46:41.280222 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:46:41.287843 1039943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:46:41.287924 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:46:41.295456 1039943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:46:41.308022 1039943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:46:41.324403 1039943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
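The 2219-byte file staged here is the kubeadm manifest generated above (kubeadm.yaml.new, later copied to kubeadm.yaml). On a first start like this one it is normally handed to kubeadm init; a hedged sketch of that invocation, inferred from the binary path and the SystemVerification note further down rather than from output shown in this excerpt:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification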
	I1208 01:46:41.337573 1039943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:46:41.341125 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.350760 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:41.469701 1039943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:46:41.486526 1039943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:46:41.486549 1039943 certs.go:195] generating shared ca certs ...
	I1208 01:46:41.486570 1039943 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.486758 1039943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:46:41.486827 1039943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:46:41.486867 1039943 certs.go:257] generating profile certs ...
	I1208 01:46:41.486942 1039943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:46:41.486953 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt with IP's: []
	I1208 01:46:41.756525 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt ...
	I1208 01:46:41.756551 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt: {Name:mk0603ae5124c088a63c1752061db6508bab22f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756725 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key ...
	I1208 01:46:41.756733 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key: {Name:mkca461b7eac0897c193e0836f61829f4e9d4b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756813 1039943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:46:41.756826 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:46:41.854144 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e ...
	I1208 01:46:41.854175 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e: {Name:mk808166fcccc166bf8bbe144226f9daaa100961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854378 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e ...
	I1208 01:46:41.854395 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e: {Name:mkad238fa32487b653b0a9f151377065f0951a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854489 1039943 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt
	I1208 01:46:41.854571 1039943 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key
	I1208 01:46:41.854631 1039943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:46:41.854650 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt with IP's: []
	I1208 01:46:42.097939 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt ...
	I1208 01:46:42.097979 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt: {Name:mk99d1d19a981d57bf4d12a2cb81e3e53a22a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098217 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key ...
	I1208 01:46:42.098235 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key: {Name:mk0c7b8d27fa7ac473db57ad4f3abf32e11a6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098441 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:46:42.098497 1039943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:46:42.098508 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:46:42.098536 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:46:42.098564 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:46:42.098594 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:46:42.098649 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:42.099505 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:46:42.123800 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:46:42.149931 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:46:42.172486 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:46:42.204182 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:46:42.225772 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:46:42.248373 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:46:42.277328 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:46:42.301927 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:46:42.325492 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:46:42.345377 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:46:42.363969 1039943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:46:42.376790 1039943 ssh_runner.go:195] Run: openssl version
	I1208 01:46:42.383055 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.390479 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:46:42.397965 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401796 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401919 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.443135 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:46:42.450626 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:46:42.458240 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.465745 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:46:42.473315 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477290 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477357 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.518810 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:46:42.527316 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:46:42.538286 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.547106 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:46:42.555430 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560073 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560165 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.601377 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.609019 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.616650 1039943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:46:42.620441 1039943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:46:42.620500 1039943 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:42.620585 1039943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:46:42.620649 1039943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:46:42.649932 1039943 cri.go:89] found id: ""
	I1208 01:46:42.650013 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:46:42.657890 1039943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:46:42.665577 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:46:42.665663 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:46:42.673380 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:46:42.673399 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:46:42.673455 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:46:42.681009 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:46:42.681082 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:46:42.688582 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:46:42.696709 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:46:42.696788 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:46:42.704191 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.711702 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:46:42.711814 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.719024 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:46:42.726923 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:46:42.727007 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:46:42.734562 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:46:42.771766 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:46:42.772014 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:46:42.846706 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:46:42.846791 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:46:42.846859 1039943 kubeadm.go:319] OS: Linux
	I1208 01:46:42.846914 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:46:42.846982 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:46:42.847042 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:46:42.847102 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:46:42.847163 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:46:42.847225 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:46:42.847283 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:46:42.847345 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:46:42.847396 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:46:42.914142 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:46:42.914273 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:46:42.914365 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:46:42.927340 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:46:42.933605 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:46:42.933772 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:46:42.933880 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:46:43.136966 1039943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:46:43.328738 1039943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:46:43.732500 1039943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:46:43.956866 1039943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:46:44.129125 1039943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:46:44.129375 1039943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.337195 1039943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:46:44.337494 1039943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.588532 1039943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:46:44.954533 1039943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:46:45.238719 1039943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:46:45.239782 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:46:45.718662 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:46:45.762985 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:46:46.020127 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:46:46.317772 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:46:46.545386 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:46:46.546080 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:46:46.549393 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:46:46.552921 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:46:46.553058 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:46:46.553140 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:46:46.553786 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:46:46.570986 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:46:46.571335 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:46:46.579342 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:46:46.579896 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:46:46.580195 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:46:46.716587 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:46:46.716716 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:49:07.156332 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001184239s
	I1208 01:49:07.156375 1021094 kubeadm.go:319] 
	I1208 01:49:07.156475 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:49:07.156683 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:49:07.156865 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:49:07.156875 1021094 kubeadm.go:319] 
	I1208 01:49:07.157056 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:49:07.157354 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:49:07.157410 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:49:07.157416 1021094 kubeadm.go:319] 
	I1208 01:49:07.162909 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:49:07.163434 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:49:07.163569 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:49:07.163832 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:49:07.163845 1021094 kubeadm.go:319] 
	I1208 01:49:07.163964 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:49:07.163990 1021094 kubeadm.go:403] duration metric: took 8m8.109200094s to StartCluster
	I1208 01:49:07.164030 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:49:07.164092 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:49:07.189444 1021094 cri.go:89] found id: ""
	I1208 01:49:07.189467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.189475 1021094 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:49:07.189482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:49:07.189545 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:49:07.214553 1021094 cri.go:89] found id: ""
	I1208 01:49:07.214578 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.214586 1021094 logs.go:284] No container was found matching "etcd"
	I1208 01:49:07.214592 1021094 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:49:07.214652 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:49:07.240730 1021094 cri.go:89] found id: ""
	I1208 01:49:07.240765 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.240774 1021094 logs.go:284] No container was found matching "coredns"
	I1208 01:49:07.240780 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:49:07.240877 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:49:07.275951 1021094 cri.go:89] found id: ""
	I1208 01:49:07.275976 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.275984 1021094 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:49:07.275991 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:49:07.276048 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:49:07.308446 1021094 cri.go:89] found id: ""
	I1208 01:49:07.308467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.308476 1021094 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:49:07.308482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:49:07.308544 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:49:07.337708 1021094 cri.go:89] found id: ""
	I1208 01:49:07.337730 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.337738 1021094 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:49:07.337744 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:49:07.337804 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:49:07.365399 1021094 cri.go:89] found id: ""
	I1208 01:49:07.365420 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.365428 1021094 logs.go:284] No container was found matching "kindnet"
	I1208 01:49:07.365438 1021094 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:49:07.365449 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:49:07.429624 1021094 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:49:07.429646 1021094 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:49:07.429657 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:49:07.471772 1021094 logs.go:123] Gathering logs for container status ...
	I1208 01:49:07.471809 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:49:07.507231 1021094 logs.go:123] Gathering logs for kubelet ...
	I1208 01:49:07.507258 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:49:07.572140 1021094 logs.go:123] Gathering logs for dmesg ...
	I1208 01:49:07.572179 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:49:07.589992 1021094 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:49:07.590043 1021094 out.go:285] * 
	W1208 01:49:07.590093 1021094 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.590111 1021094 out.go:285] * 
	W1208 01:49:07.592441 1021094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:49:07.598676 1021094 out.go:203] 
	W1208 01:49:07.601501 1021094 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.601539 1021094 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:49:07.601583 1021094 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:49:07.604654 1021094 out.go:203] 
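The suggestion above points at the kubelet/cgroup mismatch rather than at kubeadm itself. A minimal sketch of the troubleshooting the output recommends, assuming a shell on the affected node and a generic profile name; the commands and the extra-config flag are quoted from the output above:

    # On the node (e.g. minikube ssh -p <profile>), inspect the kubelet as suggested:
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # Then retry the start with the cgroup-driver override the suggestion mentions:
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd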
	
	
	==> CRI-O <==
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368154779Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368198923Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.013925665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014099747Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014170156Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.265576665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.26604081Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.266101947Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338552201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338884118Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338939799Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.58396125Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fe4987d-fa68-4798-80d2-b6f670609a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.599048175Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=af362a5f-b1e8-40fc-9b9b-22ea72b61af9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.601243245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d8a1b229-d4f4-4c3b-92fb-098f8f0fb136 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.60654358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0bb15a41-3aee-43e0-bbf9-fda78b30c461 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.607953861Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=981c34d5-0cb0-4db8-9c75-23c9d8d2cd19 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.611594321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a9dea912-c284-4838-a031-472efe431421 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.615047193Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fa013a89-c419-4775-97ab-ba118f73c5bc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.415842018Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5cc098c-7f40-49e5-bba2-01599a22769f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.418814555Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b7a480df-c2a0-408a-8f62-dd9431b94efc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.420546135Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=58c6887c-b0c7-4eff-b873-b4f5e7c16d5e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.42189714Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6b9ba419-3d5e-487a-8468-75890c99582f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.422761051Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8de4c7bf-c80e-41eb-9a33-14c1fff856ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.424360027Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7157efaa-0bc0-4348-a5c6-374c01495c4a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.425327118Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2a907da-3366-4a83-862f-ce206ad44275 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:11.705677    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:11.706522    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:11.707677    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:11.708384    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:11.709973    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:49:11 up  6:31,  0 user,  load average: 0.60, 1.37, 1.83
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:49:09 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:09 no-preload-389831 kubelet[5696]: E1208 01:49:09.575414    5696 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:09 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:09 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:10 no-preload-389831 kubelet[5791]: E1208 01:49:10.321392    5791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:10 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:11 no-preload-389831 kubelet[5834]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:11 no-preload-389831 kubelet[5834]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:49:11 no-preload-389831 kubelet[5834]: E1208 01:49:11.056476    5834 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:49:11 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:49:11 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:49:11 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 08 01:49:11 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:49:11 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
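The kubelet journal above carries the actual root cause for this run: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1. A hedged sketch of how one might confirm that, plus the relaxation the preflight warning names; the YAML spelling is inferred from the 'FailCgroupV1' option mentioned in the warning, so treat it as an assumption for this beta:

    # cgroup2fs means the host is on cgroup v2; tmpfs here means cgroup v1 (the failing case):
    stat -fc %T /sys/fs/cgroup/
    # Per the preflight warning, cgroup v1 hosts must set the kubelet option FailCgroupV1 to false,
    # i.e. (assumed field spelling) in KubeletConfiguration:
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   failCgroupV1: false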
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 6 (338.537328ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:49:12.158457 1044933 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.99s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (96.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1208 01:49:19.635543  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.721873  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.728382  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.739883  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.761388  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.802834  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:52.884310  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:53.045870  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:53.367466  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:54.009618  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:55.291095  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:57.852467  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:50:02.974596  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:50:13.216082  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:50:33.697981  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:50:34.379568  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:50:45.329229  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.577251656s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
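The failure chain above is consistent with the cluster never coming up: the addon enable runs `kubectl apply --force` callbacks inside the node against localhost:8443, and every manifest fails validation because that endpoint refuses connections, so minikube exits with MK_ADDON_ENABLE. From the host, a quick triage step is to check whether anything answers on the port Docker publishes for the container's 8443/tcp. A minimal sketch, with the port (127.0.0.1:33785) taken from this run's docker inspect output below; it changes between runs.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 127.0.0.1:33785 is the host port Docker publishes for the node's
	// 8443/tcp in this run (see the docker inspect output below).
	conn, err := net.DialTimeout("tcp", "127.0.0.1:33785", 3*time.Second)
	if err != nil {
		// Same condition kubectl reports above as "connection refused".
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port accepts connections; retry the addon enable")
}
```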
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-389831 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-389831 describe deploy/metrics-server -n kube-system: exit status 1 (65.533457ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-389831" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-389831 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
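The describe step fails for the same underlying reason: the cluster never registered an endpoint, so the kubeconfig used by this job has no "no-preload-389831" context. A short sketch of listing the contexts that are actually present in that kubeconfig, assuming client-go is available in the module; the path is the KUBECONFIG shown later in the Last Start log.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG used by this job, per the Last Start log below.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22054-789938/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
}
```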
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:40:32.261581076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79193c30e8ff7cdcf99f747e987c12c0c02ab2d4b1e09c1f844845ffd7e244c8",
	            "SandboxKey": "/var/run/docker/netns/79193c30e8ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:a7:b4:4f:0b:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "ac3963043985cb3c4beb5ad7f93727fc9a3cc524dd93131be5af0216706250c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
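The inspect dump above is complete, but only a few fields matter for this failure: the container is still running, 8443/tcp is published on 127.0.0.1:33785, and the node sits at 192.168.76.2 on the no-preload-389831 network. A sketch of pulling just those fields with docker inspect's format template; the index syntax mirrors the template minikube itself uses further down in the Last Start log.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Go-template lookups over the inspect document shown above.
	// Expected output for this run: "running 33785 192.168.76.2".
	format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}} {{(index .NetworkSettings.Networks "no-preload-389831").IPAddress}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "no-preload-389831").CombinedOutput()
	if err != nil {
		fmt.Println(string(out), err)
		return
	}
	fmt.Print(string(out))
}
```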
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 6 (357.592179ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:50:47.176251 1046591 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p cert-expiration-428091                                                                                                                                                                                                                            │ cert-expiration-428091       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ delete  │ -p old-k8s-version-661561                                                                                                                                                                                                                            │ old-k8s-version-661561       │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:40 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │                     │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:46:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:46:29.329866 1039943 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:46:29.330081 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330108 1039943 out.go:374] Setting ErrFile to fd 2...
	I1208 01:46:29.330126 1039943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:46:29.330385 1039943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:46:29.330823 1039943 out.go:368] Setting JSON to false
	I1208 01:46:29.331797 1039943 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23322,"bootTime":1765135068,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:46:29.331896 1039943 start.go:143] virtualization:  
	I1208 01:46:29.336178 1039943 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:46:29.339647 1039943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:46:29.339692 1039943 notify.go:221] Checking for updates...
	I1208 01:46:29.343070 1039943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:46:29.346748 1039943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:46:29.349908 1039943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:46:29.353489 1039943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:46:29.356725 1039943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:46:29.360434 1039943 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:29.360559 1039943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:46:29.382085 1039943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:46:29.382198 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.440774 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.431745879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.440872 1039943 docker.go:319] overlay module found
	I1208 01:46:29.444115 1039943 out.go:179] * Using the docker driver based on user configuration
	I1208 01:46:29.447050 1039943 start.go:309] selected driver: docker
	I1208 01:46:29.447088 1039943 start.go:927] validating driver "docker" against <nil>
	I1208 01:46:29.447103 1039943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:46:29.447822 1039943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:46:29.513492 1039943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:46:29.504737954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:46:29.513651 1039943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:46:29.513674 1039943 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:46:29.513890 1039943 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:46:29.517063 1039943 out.go:179] * Using Docker driver with root privileges
	I1208 01:46:29.519963 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:29.520039 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:29.520052 1039943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:46:29.520136 1039943 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:29.523357 1039943 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:46:29.526151 1039943 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:46:29.529015 1039943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:46:29.531940 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:29.532005 1039943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:46:29.532021 1039943 cache.go:65] Caching tarball of preloaded images
	I1208 01:46:29.532026 1039943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:46:29.532106 1039943 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:46:29.532117 1039943 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:46:29.532224 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:29.532242 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json: {Name:mk18f08541f75fcff1b0d7777fe02845efecf137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:29.551296 1039943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:46:29.551320 1039943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:46:29.551340 1039943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:46:29.551371 1039943 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:46:29.551480 1039943 start.go:364] duration metric: took 87.493µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:46:29.551523 1039943 start.go:93] Provisioning new machine with config: &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:46:29.551657 1039943 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:46:29.555023 1039943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:46:29.555251 1039943 start.go:159] libmachine.API.Create for "newest-cni-448023" (driver="docker")
	I1208 01:46:29.555289 1039943 client.go:173] LocalClient.Create starting
	I1208 01:46:29.555374 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 01:46:29.555413 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555432 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555492 1039943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 01:46:29.555518 1039943 main.go:143] libmachine: Decoding PEM data...
	I1208 01:46:29.555535 1039943 main.go:143] libmachine: Parsing certificate...
	I1208 01:46:29.555895 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:46:29.572337 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:46:29.572449 1039943 network_create.go:284] running [docker network inspect newest-cni-448023] to gather additional debugging logs...
	I1208 01:46:29.572473 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023
	W1208 01:46:29.587652 1039943 cli_runner.go:211] docker network inspect newest-cni-448023 returned with exit code 1
	I1208 01:46:29.587681 1039943 network_create.go:287] error running [docker network inspect newest-cni-448023]: docker network inspect newest-cni-448023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-448023 not found
	I1208 01:46:29.587697 1039943 network_create.go:289] output of [docker network inspect newest-cni-448023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-448023 not found
	
	** /stderr **
	I1208 01:46:29.587791 1039943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:29.603250 1039943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 01:46:29.603598 1039943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 01:46:29.603957 1039943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 01:46:29.604235 1039943 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 01:46:29.604628 1039943 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6ec0}
	I1208 01:46:29.604652 1039943 network_create.go:124] attempt to create docker network newest-cni-448023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:46:29.604709 1039943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-448023 newest-cni-448023
	I1208 01:46:29.659267 1039943 network_create.go:108] docker network newest-cni-448023 192.168.85.0/24 created
	I1208 01:46:29.659307 1039943 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-448023" container
	I1208 01:46:29.659395 1039943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:46:29.675118 1039943 cli_runner.go:164] Run: docker volume create newest-cni-448023 --label name.minikube.sigs.k8s.io=newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:46:29.693502 1039943 oci.go:103] Successfully created a docker volume newest-cni-448023
	I1208 01:46:29.693603 1039943 cli_runner.go:164] Run: docker run --rm --name newest-cni-448023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --entrypoint /usr/bin/test -v newest-cni-448023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:46:30.260940 1039943 oci.go:107] Successfully prepared a docker volume newest-cni-448023
	I1208 01:46:30.261013 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:30.261031 1039943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:46:30.261099 1039943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:46:34.244465 1039943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-448023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.983325366s)
	I1208 01:46:34.244500 1039943 kic.go:203] duration metric: took 3.983465364s to extract preloaded images to volume ...
	W1208 01:46:34.244633 1039943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:46:34.244781 1039943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:46:34.337950 1039943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-448023 --name newest-cni-448023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-448023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-448023 --network newest-cni-448023 --ip 192.168.85.2 --volume newest-cni-448023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:46:34.625342 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Running}}
	I1208 01:46:34.649912 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.674400 1039943 cli_runner.go:164] Run: docker exec newest-cni-448023 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:46:34.723723 1039943 oci.go:144] the created container "newest-cni-448023" has a running status.
	I1208 01:46:34.723752 1039943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa...
	I1208 01:46:34.892140 1039943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:46:34.912965 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:34.938479 1039943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:46:34.938507 1039943 kic_runner.go:114] Args: [docker exec --privileged newest-cni-448023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:46:35.028018 1039943 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:46:35.058920 1039943 machine.go:94] provisionDockerMachine start ...
	I1208 01:46:35.059025 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:35.099088 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:35.099448 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:35.099466 1039943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:46:35.100020 1039943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47050->127.0.0.1:33807: read: connection reset by peer
	I1208 01:46:38.254334 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.254358 1039943 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:46:38.254421 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.272041 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.272365 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.272382 1039943 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:46:38.436500 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:46:38.436590 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.453974 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:38.454288 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:38.454304 1039943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:46:38.607227 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:46:38.607264 1039943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:46:38.607291 1039943 ubuntu.go:190] setting up certificates
	I1208 01:46:38.607301 1039943 provision.go:84] configureAuth start
	I1208 01:46:38.607362 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:38.623687 1039943 provision.go:143] copyHostCerts
	I1208 01:46:38.623751 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:46:38.623766 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:46:38.623843 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:46:38.623946 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:46:38.623958 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:46:38.623995 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:46:38.624062 1039943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:46:38.624071 1039943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:46:38.624096 1039943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:46:38.624155 1039943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:46:38.807873 1039943 provision.go:177] copyRemoteCerts
	I1208 01:46:38.807949 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:46:38.808001 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:38.828753 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:38.934898 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:46:38.952864 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:46:38.970012 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:46:38.987418 1039943 provision.go:87] duration metric: took 380.093979ms to configureAuth
	I1208 01:46:38.987489 1039943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:46:38.987701 1039943 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:46:38.987812 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.021586 1039943 main.go:143] libmachine: Using SSH client type: native
	I1208 01:46:39.021916 1039943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1208 01:46:39.021944 1039943 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:46:39.335041 1039943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:46:39.335061 1039943 machine.go:97] duration metric: took 4.276119883s to provisionDockerMachine
	I1208 01:46:39.335070 1039943 client.go:176] duration metric: took 9.779771841s to LocalClient.Create
	I1208 01:46:39.335086 1039943 start.go:167] duration metric: took 9.779836023s to libmachine.API.Create "newest-cni-448023"
	I1208 01:46:39.335093 1039943 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:46:39.335105 1039943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:46:39.335174 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:46:39.335220 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.352266 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.458536 1039943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:46:39.461608 1039943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:46:39.461639 1039943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:46:39.461650 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:46:39.461705 1039943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:46:39.461789 1039943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:46:39.461894 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:46:39.469247 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:39.486243 1039943 start.go:296] duration metric: took 151.134201ms for postStartSetup
	I1208 01:46:39.486633 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.504855 1039943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:46:39.505123 1039943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:46:39.505164 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.523441 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.627950 1039943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:46:39.632598 1039943 start.go:128] duration metric: took 10.080925153s to createHost
	I1208 01:46:39.632621 1039943 start.go:83] releasing machines lock for "newest-cni-448023", held for 10.081126738s
	I1208 01:46:39.632691 1039943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:46:39.652131 1039943 ssh_runner.go:195] Run: cat /version.json
	I1208 01:46:39.652157 1039943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:46:39.652183 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.652218 1039943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:46:39.681809 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.682602 1039943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:46:39.869694 1039943 ssh_runner.go:195] Run: systemctl --version
	I1208 01:46:39.876126 1039943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:46:39.913719 1039943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:46:39.918384 1039943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:46:39.918458 1039943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:46:39.947242 1039943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:46:39.947265 1039943 start.go:496] detecting cgroup driver to use...
	I1208 01:46:39.947298 1039943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:46:39.947349 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:46:39.965768 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:46:39.978168 1039943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:46:39.978234 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:46:39.995812 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:46:40.019051 1039943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:46:40.157466 1039943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:46:40.288788 1039943 docker.go:234] disabling docker service ...
	I1208 01:46:40.288897 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:46:40.314027 1039943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:46:40.329209 1039943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:46:40.468296 1039943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:46:40.591028 1039943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:46:40.604723 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:46:40.618613 1039943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:46:40.618699 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.627724 1039943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:46:40.627809 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.637292 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.646718 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.656124 1039943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:46:40.664289 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.672999 1039943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:46:40.686929 1039943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
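Taken together, the sed edits above amount to a small CRI-O override: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A minimal one-shot sketch of the same settings is shown below; the drop-in file name and the [crio.image]/[crio.runtime] section headers are assumptions for illustration only, since the test actually edits the existing 02-crio.conf in place rather than writing a new file.
	# Hypothetical consolidated CRI-O override (illustrative file name).
	sudo tee /etc/crio/crio.conf.d/99-minikube-example.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio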
	I1208 01:46:40.695637 1039943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:46:40.703116 1039943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:46:40.710332 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:40.834286 1039943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:46:41.006471 1039943 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:46:41.006581 1039943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:46:41.017809 1039943 start.go:564] Will wait 60s for crictl version
	I1208 01:46:41.017944 1039943 ssh_runner.go:195] Run: which crictl
	I1208 01:46:41.022606 1039943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:46:41.056937 1039943 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:46:41.057065 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.093495 1039943 ssh_runner.go:195] Run: crio --version
	I1208 01:46:41.124549 1039943 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:46:41.127395 1039943 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:46:41.143475 1039943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:46:41.147287 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.159892 1039943 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:46:41.162523 1039943 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:46:41.162667 1039943 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:46:41.162750 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.195193 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.195217 1039943 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:46:41.195275 1039943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:46:41.220173 1039943 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:46:41.220196 1039943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:46:41.220203 1039943 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:46:41.220293 1039943 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:46:41.220379 1039943 ssh_runner.go:195] Run: crio config
	I1208 01:46:41.279892 1039943 cni.go:84] Creating CNI manager for ""
	I1208 01:46:41.279918 1039943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:46:41.279934 1039943 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:46:41.279985 1039943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:46:41.280144 1039943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:46:41.280222 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:46:41.287843 1039943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:46:41.287924 1039943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:46:41.295456 1039943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:46:41.308022 1039943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:46:41.324403 1039943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
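The kubeadm config rendered above is what kubeadm init is later pointed at. If one wanted to sanity-check such a config by hand without touching cluster state, a dry run inside the node container is one option. This is a hedged sketch, not something the test harness runs; it reuses the container name, binary path, and config path that appear earlier in this log, and kubeadm's --dry-run flag only reports what would be done.
	# Hypothetical sanity check of the generated kubeadm config (dry run only).
	docker exec newest-cni-448023 sh -c \
	  'PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH \
	   kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run'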
	I1208 01:46:41.337573 1039943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:46:41.341125 1039943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:46:41.350760 1039943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:46:41.469701 1039943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:46:41.486526 1039943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:46:41.486549 1039943 certs.go:195] generating shared ca certs ...
	I1208 01:46:41.486570 1039943 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.486758 1039943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:46:41.486827 1039943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:46:41.486867 1039943 certs.go:257] generating profile certs ...
	I1208 01:46:41.486942 1039943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:46:41.486953 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt with IP's: []
	I1208 01:46:41.756525 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt ...
	I1208 01:46:41.756551 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.crt: {Name:mk0603ae5124c088a63c1752061db6508bab22f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756725 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key ...
	I1208 01:46:41.756733 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key: {Name:mkca461b7eac0897c193e0836f61829f4e9d4b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.756813 1039943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:46:41.756826 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:46:41.854144 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e ...
	I1208 01:46:41.854175 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e: {Name:mk808166fcccc166bf8bbe144226f9daaa100961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854378 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e ...
	I1208 01:46:41.854395 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e: {Name:mkad238fa32487b653b0a9f151377065f0951a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:41.854489 1039943 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt
	I1208 01:46:41.854571 1039943 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key
	I1208 01:46:41.854631 1039943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:46:41.854650 1039943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt with IP's: []
	I1208 01:46:42.097939 1039943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt ...
	I1208 01:46:42.097979 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt: {Name:mk99d1d19a981d57bf4d12a2cb81e3e53a22a71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098217 1039943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key ...
	I1208 01:46:42.098235 1039943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key: {Name:mk0c7b8d27fa7ac473db57ad4f3abf32e11a6cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:46:42.098441 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:46:42.098497 1039943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:46:42.098508 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:46:42.098536 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:46:42.098564 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:46:42.098594 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:46:42.098649 1039943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:46:42.099505 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:46:42.123800 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:46:42.149931 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:46:42.172486 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:46:42.204182 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:46:42.225772 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:46:42.248373 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:46:42.277328 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:46:42.301927 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:46:42.325492 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:46:42.345377 1039943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:46:42.363969 1039943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:46:42.376790 1039943 ssh_runner.go:195] Run: openssl version
	I1208 01:46:42.383055 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.390479 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:46:42.397965 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401796 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.401919 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:46:42.443135 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:46:42.450626 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:46:42.458240 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.465745 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:46:42.473315 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477290 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.477357 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:46:42.518810 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:46:42.527316 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 01:46:42.538286 1039943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.547106 1039943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:46:42.555430 1039943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560073 1039943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.560165 1039943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:46:42.601377 1039943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:46:42.609019 1039943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
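The openssl/ln pairs above implement the standard OpenSSL hash-link layout for trusted CAs: each certificate under /etc/ssl/certs gets a symlink named after its subject hash (for example b5213941.0 for minikubeCA.pem). A minimal sketch of the same idea for one certificate, using the paths from this log:
	# Derive the subject hash and create the <hash>.0 symlink that OpenSSL looks up.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"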
	I1208 01:46:42.616650 1039943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:46:42.620441 1039943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:46:42.620500 1039943 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:46:42.620585 1039943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:46:42.620649 1039943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:46:42.649932 1039943 cri.go:89] found id: ""
	I1208 01:46:42.650013 1039943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:46:42.657890 1039943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:46:42.665577 1039943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:46:42.665663 1039943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:46:42.673380 1039943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:46:42.673399 1039943 kubeadm.go:158] found existing configuration files:
	
	I1208 01:46:42.673455 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:46:42.681009 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:46:42.681082 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:46:42.688582 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:46:42.696709 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:46:42.696788 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:46:42.704191 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.711702 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:46:42.711814 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:46:42.719024 1039943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:46:42.726923 1039943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:46:42.727007 1039943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:46:42.734562 1039943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:46:42.771766 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:46:42.772014 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:46:42.846706 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:46:42.846791 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:46:42.846859 1039943 kubeadm.go:319] OS: Linux
	I1208 01:46:42.846914 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:46:42.846982 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:46:42.847042 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:46:42.847102 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:46:42.847163 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:46:42.847225 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:46:42.847283 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:46:42.847345 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:46:42.847396 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:46:42.914142 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:46:42.914273 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:46:42.914365 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:46:42.927340 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:46:42.933605 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:46:42.933772 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:46:42.933880 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:46:43.136966 1039943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:46:43.328738 1039943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:46:43.732500 1039943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:46:43.956866 1039943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:46:44.129125 1039943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:46:44.129375 1039943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.337195 1039943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:46:44.337494 1039943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-448023] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:46:44.588532 1039943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:46:44.954533 1039943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:46:45.238719 1039943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:46:45.239782 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:46:45.718662 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:46:45.762985 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:46:46.020127 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:46:46.317772 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:46:46.545386 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:46:46.546080 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:46:46.549393 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:46:46.552921 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:46:46.553058 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:46:46.553140 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:46:46.553786 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:46:46.570986 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:46:46.571335 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:46:46.579342 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:46:46.579896 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:46:46.580195 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:46:46.716587 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:46:46.716716 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:49:07.156332 1021094 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001184239s
	I1208 01:49:07.156375 1021094 kubeadm.go:319] 
	I1208 01:49:07.156475 1021094 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:49:07.156683 1021094 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:49:07.156865 1021094 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:49:07.156875 1021094 kubeadm.go:319] 
	I1208 01:49:07.157056 1021094 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:49:07.157354 1021094 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:49:07.157410 1021094 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:49:07.157416 1021094 kubeadm.go:319] 
	I1208 01:49:07.162909 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:49:07.163434 1021094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:49:07.163569 1021094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:49:07.163832 1021094 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:49:07.163845 1021094 kubeadm.go:319] 
	I1208 01:49:07.163964 1021094 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
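The failure mode here is the generic kubeadm wait-control-plane timeout: the kubelet never answered its local healthz probe, so no control-plane containers were ever created (the empty crictl listings below confirm that). The commands kubeadm suggests can be run through the minikube node container; this is a hedged sketch with a placeholder container name, since this excerpt does not show which profile's node emitted these lines.
	# NODE is a placeholder for the minikube node container (find it with: docker ps --filter name=<profile>).
	NODE=<node-container-name>
	docker exec "$NODE" systemctl status kubelet --no-pager
	docker exec "$NODE" journalctl -xeu kubelet --no-pager | tail -n 100
	# Probe the same endpoint kubeadm was polling.
	docker exec "$NODE" curl -sS http://127.0.0.1:10248/healthz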
	I1208 01:49:07.163990 1021094 kubeadm.go:403] duration metric: took 8m8.109200094s to StartCluster
	I1208 01:49:07.164030 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:49:07.164092 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:49:07.189444 1021094 cri.go:89] found id: ""
	I1208 01:49:07.189467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.189475 1021094 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:49:07.189482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:49:07.189545 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:49:07.214553 1021094 cri.go:89] found id: ""
	I1208 01:49:07.214578 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.214586 1021094 logs.go:284] No container was found matching "etcd"
	I1208 01:49:07.214592 1021094 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:49:07.214652 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:49:07.240730 1021094 cri.go:89] found id: ""
	I1208 01:49:07.240765 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.240774 1021094 logs.go:284] No container was found matching "coredns"
	I1208 01:49:07.240780 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:49:07.240877 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:49:07.275951 1021094 cri.go:89] found id: ""
	I1208 01:49:07.275976 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.275984 1021094 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:49:07.275991 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:49:07.276048 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:49:07.308446 1021094 cri.go:89] found id: ""
	I1208 01:49:07.308467 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.308476 1021094 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:49:07.308482 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:49:07.308544 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:49:07.337708 1021094 cri.go:89] found id: ""
	I1208 01:49:07.337730 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.337738 1021094 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:49:07.337744 1021094 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:49:07.337804 1021094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:49:07.365399 1021094 cri.go:89] found id: ""
	I1208 01:49:07.365420 1021094 logs.go:282] 0 containers: []
	W1208 01:49:07.365428 1021094 logs.go:284] No container was found matching "kindnet"
	I1208 01:49:07.365438 1021094 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:49:07.365449 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:49:07.429624 1021094 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:49:07.421699    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.422381    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.423965    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.424428    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:49:07.426094    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:49:07.429646 1021094 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:49:07.429657 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:49:07.471772 1021094 logs.go:123] Gathering logs for container status ...
	I1208 01:49:07.471809 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:49:07.507231 1021094 logs.go:123] Gathering logs for kubelet ...
	I1208 01:49:07.507258 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:49:07.572140 1021094 logs.go:123] Gathering logs for dmesg ...
	I1208 01:49:07.572179 1021094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:49:07.589992 1021094 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:49:07.590043 1021094 out.go:285] * 
	W1208 01:49:07.590093 1021094 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.590111 1021094 out.go:285] * 
	W1208 01:49:07.592441 1021094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:49:07.598676 1021094 out.go:203] 
	W1208 01:49:07.601501 1021094 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001184239s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:49:07.601539 1021094 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:49:07.601583 1021094 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:49:07.604654 1021094 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368154779Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:42 no-preload-389831 crio[837]: time="2025-12-08T01:40:42.368198923Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=221101b1-c8a1-4f9f-858c-46cf6c2d1139 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.013925665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014099747Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.014170156Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=0af3e1ff-69df-443d-a989-323cd8d47fbe name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.265576665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.26604081Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:43 no-preload-389831 crio[837]: time="2025-12-08T01:40:43.266101947Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=f8061e4b-fd1d-4a35-92e1-f12eb9ed666a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338552201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338884118Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:45 no-preload-389831 crio[837]: time="2025-12-08T01:40:45.338939799Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=36c8ed33-a52c-4b7d-bae0-37b7ec176b3f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.58396125Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fe4987d-fa68-4798-80d2-b6f670609a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.599048175Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=af362a5f-b1e8-40fc-9b9b-22ea72b61af9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.601243245Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d8a1b229-d4f4-4c3b-92fb-098f8f0fb136 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.60654358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0bb15a41-3aee-43e0-bbf9-fda78b30c461 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.607953861Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=981c34d5-0cb0-4db8-9c75-23c9d8d2cd19 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.611594321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a9dea912-c284-4838-a031-472efe431421 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:40:59 no-preload-389831 crio[837]: time="2025-12-08T01:40:59.615047193Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=fa013a89-c419-4775-97ab-ba118f73c5bc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.415842018Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c5cc098c-7f40-49e5-bba2-01599a22769f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.418814555Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b7a480df-c2a0-408a-8f62-dd9431b94efc name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.420546135Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=58c6887c-b0c7-4eff-b873-b4f5e7c16d5e name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.42189714Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6b9ba419-3d5e-487a-8468-75890c99582f name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.422761051Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8de4c7bf-c80e-41eb-9a33-14c1fff856ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.424360027Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7157efaa-0bc0-4348-a5c6-374c01495c4a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:45:05 no-preload-389831 crio[837]: time="2025-12-08T01:45:05.425327118Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d2a907da-3366-4a83-862f-ce206ad44275 name=/runtime.v1.ImageService/ImageStatus
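The CRI-O section above mostly shows the runtime repeatedly checking for the control-plane and storage-provisioner images, with several reported as "not found"; this is expected for a --preload=false profile until the images are actually pulled. A hypothetical follow-up (not part of the recorded test run) to inspect the node's image store with the same binary the test uses:

    # list images currently known to CRI-O inside the node
    out/minikube-linux-arm64 ssh -p no-preload-389831 -- sudo crictl images
    # try pulling one of the missing images manually to rule out registry/network issues
    out/minikube-linux-arm64 ssh -p no-preload-389831 -- sudo crictl pull gcr.io/k8s-minikube/storage-provisioner:v5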
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:50:48.025243    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:50:48.026247    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:50:48.028111    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:50:48.028483    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:50:48.030138    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:50:48 up  6:32,  0 user,  load average: 0.22, 1.02, 1.65
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:50:45 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:50:46 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 452.
	Dec 08 01:50:46 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:46 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:46 no-preload-389831 kubelet[6769]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:46 no-preload-389831 kubelet[6769]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:46 no-preload-389831 kubelet[6769]: E1208 01:50:46.332391    6769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:50:46 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:50:46 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:47 no-preload-389831 kubelet[6781]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:47 no-preload-389831 kubelet[6781]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:47 no-preload-389831 kubelet[6781]: E1208 01:50:47.083298    6781 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:47 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:50:47 no-preload-389831 kubelet[6842]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:47 no-preload-389831 kubelet[6842]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:50:47 no-preload-389831 kubelet[6842]: E1208 01:50:47.884275    6842 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:50:47 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
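The kubelet journal in the dump above shows the actual failure: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out after 4m0s. A hedged sketch of how this could be confirmed and worked around, using only the checks and flags named in the warnings above (illustrative commands, not executed by the test):

    # confirm the host cgroup version: "cgroup2fs" means v2, "tmpfs" means v1
    stat -fc %T /sys/fs/cgroup

    # retry with the cgroup driver minikube itself suggests in its error output
    out/minikube-linux-arm64 start -p no-preload-389831 --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

    # per the kubeadm warning, kubelet v1.35+ on a cgroup v1 host also needs the kubelet
    # configuration option FailCgroupV1 set to false; check what the generated config contains
    out/minikube-linux-arm64 ssh -p no-preload-389831 -- sudo grep -i failcgroupv1 /var/lib/kubelet/config.yaml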
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 6 (455.344608ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:50:48.651604 1046852 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (96.49s)
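Besides the stopped apiserver, the status output above also warns that the kubeconfig no longer has an entry for "no-preload-389831". A minimal sketch of the follow-up the warning itself suggests (illustrative, not part of the recorded run):

    # re-point the kubeconfig context at the profile, as the warning suggests
    out/minikube-linux-arm64 update-context -p no-preload-389831
    # then re-check component state
    out/minikube-linux-arm64 status -p no-preload-389831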

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (370.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 01:51:14.659303  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:52:36.580655  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:52:46.336437  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:53:51.933759  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.805016686s)

                                                
                                                
-- stdout --
	* [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:50:50.286498 1047159 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:50:50.286662 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.286699 1047159 out.go:374] Setting ErrFile to fd 2...
	I1208 01:50:50.286711 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.287030 1047159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:50:50.287411 1047159 out.go:368] Setting JSON to false
	I1208 01:50:50.288307 1047159 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23583,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:50:50.288377 1047159 start.go:143] virtualization:  
	I1208 01:50:50.291362 1047159 out.go:179] * [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:50:50.295297 1047159 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:50:50.295435 1047159 notify.go:221] Checking for updates...
	I1208 01:50:50.301190 1047159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:50:50.304190 1047159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:50.307152 1047159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:50:50.310056 1047159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:50:50.312896 1047159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:50:50.316257 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:50.316883 1047159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:50:50.344630 1047159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:50:50.344748 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.404071 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.394347428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.404175 1047159 docker.go:319] overlay module found
	I1208 01:50:50.409385 1047159 out.go:179] * Using the docker driver based on existing profile
	I1208 01:50:50.412316 1047159 start.go:309] selected driver: docker
	I1208 01:50:50.412334 1047159 start.go:927] validating driver "docker" against &{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.412446 1047159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:50:50.413148 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.469330 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.460395311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.469668 1047159 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:50:50.469703 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:50.469766 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:50.469803 1047159 start.go:353] cluster config:
	{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.473111 1047159 out.go:179] * Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	I1208 01:50:50.475845 1047159 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:50:50.478645 1047159 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:50:50.481298 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:50.481363 1047159 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:50:50.481427 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.481703 1047159 cache.go:107] acquiring lock: {Name:mkb488f77623cf5688783098c8af8f37e2ccf2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481784 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:50:50.481800 1047159 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.113µs
	I1208 01:50:50.481812 1047159 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:50:50.481825 1047159 cache.go:107] acquiring lock: {Name:mk46c5b5a799bb57ec4fc052703439a88454d6c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481854 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:50:50.481859 1047159 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.513µs
	I1208 01:50:50.481865 1047159 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481874 1047159 cache.go:107] acquiring lock: {Name:mkd948fd592ac79c85c21b030b5344321f29366e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481904 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:50:50.481909 1047159 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.783µs
	I1208 01:50:50.481915 1047159 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481925 1047159 cache.go:107] acquiring lock: {Name:mk937612bf3f3168a18ddaac7a61a8bae665cda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481950 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:50:50.481956 1047159 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.206µs
	I1208 01:50:50.481962 1047159 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481970 1047159 cache.go:107] acquiring lock: {Name:mk12ceb359422aeb489a7c1f33a7ec5ed809694f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481994 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:50:50.481999 1047159 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.179µs
	I1208 01:50:50.482005 1047159 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:50:50.482018 1047159 cache.go:107] acquiring lock: {Name:mk26da6a2fb489baaddcecf1a83cf045eefe1b48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482042 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:50:50.482047 1047159 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.37µs
	I1208 01:50:50.482052 1047159 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:50:50.482061 1047159 cache.go:107] acquiring lock: {Name:mk855f3a105742255ca91bc6cacb964e2740cdc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482085 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:50:50.482090 1047159 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 30.138µs
	I1208 01:50:50.482095 1047159 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:50:50.482104 1047159 cache.go:107] acquiring lock: {Name:mk695dd8e1a707c0142f2b3898e789d03306fcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482128 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:50:50.482132 1047159 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.187µs
	I1208 01:50:50.482138 1047159 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:50:50.482143 1047159 cache.go:87] Successfully saved all images to host disk.
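For this no-preload profile each control-plane image is kept as an individual tarball under the profile's cache directory (the paths logged above) instead of a single preload archive. A quick, illustrative way to inspect that cache on the host (path shown for a default MINIKUBE_HOME):

    ls ~/.minikube/cache/images/arm64/registry.k8s.io/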
	I1208 01:50:50.501174 1047159 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:50:50.501198 1047159 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:50:50.501214 1047159 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:50:50.501245 1047159 start.go:360] acquireMachinesLock for no-preload-389831: {Name:mkc005fe96402610ac376caa09ffa5218e546ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.501307 1047159 start.go:364] duration metric: took 39.935µs to acquireMachinesLock for "no-preload-389831"
	I1208 01:50:50.501330 1047159 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:50:50.501339 1047159 fix.go:54] fixHost starting: 
	I1208 01:50:50.501613 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.517984 1047159 fix.go:112] recreateIfNeeded on no-preload-389831: state=Stopped err=<nil>
	W1208 01:50:50.518022 1047159 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:50:50.521355 1047159 out.go:252] * Restarting existing docker container for "no-preload-389831" ...
	I1208 01:50:50.521454 1047159 cli_runner.go:164] Run: docker start no-preload-389831
	I1208 01:50:50.809627 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.833454 1047159 kic.go:430] container "no-preload-389831" state is running.
	I1208 01:50:50.833842 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:50.859105 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.859482 1047159 machine.go:94] provisionDockerMachine start ...
	I1208 01:50:50.859658 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:50.883035 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:50.883401 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:50.883410 1047159 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:50:50.884458 1047159 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37578->127.0.0.1:33812: read: connection reset by peer
	I1208 01:50:54.042538 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.042564 1047159 ubuntu.go:182] provisioning hostname "no-preload-389831"
	I1208 01:50:54.042629 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.060212 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.060523 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.060540 1047159 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-389831 && echo "no-preload-389831" | sudo tee /etc/hostname
	I1208 01:50:54.224761 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.224878 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.243516 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.243871 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.243894 1047159 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-389831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-389831/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-389831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:50:54.395341 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
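The script above makes the node's own hostname resolve locally: it rewrites an existing 127.0.1.1 entry in /etc/hosts, or appends one if none is present. On the node this can be confirmed with, for example:

    grep '127.0.1.1' /etc/hosts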
	I1208 01:50:54.395369 1047159 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:50:54.395395 1047159 ubuntu.go:190] setting up certificates
	I1208 01:50:54.395406 1047159 provision.go:84] configureAuth start
	I1208 01:50:54.395468 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:54.414388 1047159 provision.go:143] copyHostCerts
	I1208 01:50:54.414467 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:50:54.414483 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:50:54.414563 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:50:54.414673 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:50:54.414678 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:50:54.414706 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:50:54.414764 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:50:54.414768 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:50:54.414791 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:50:54.414925 1047159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.no-preload-389831 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-389831]
	I1208 01:50:55.069511 1047159 provision.go:177] copyRemoteCerts
	I1208 01:50:55.069604 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:50:55.069660 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.089775 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.199796 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:50:55.220330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:50:55.238828 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:50:55.258151 1047159 provision.go:87] duration metric: took 862.724063ms to configureAuth
	I1208 01:50:55.258179 1047159 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:50:55.258429 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:55.258562 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.279199 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:55.279708 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:55.279744 1047159 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:50:55.579255 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:50:55.579319 1047159 machine.go:97] duration metric: took 4.719823255s to provisionDockerMachine
	I1208 01:50:55.579345 1047159 start.go:293] postStartSetup for "no-preload-389831" (driver="docker")
	I1208 01:50:55.579373 1047159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:50:55.579468 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:50:55.579542 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.598239 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.702980 1047159 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:50:55.706389 1047159 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:50:55.706419 1047159 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:50:55.706430 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:50:55.706488 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:50:55.706577 1047159 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:50:55.706694 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:50:55.715414 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:55.732970 1047159 start.go:296] duration metric: took 153.595815ms for postStartSetup
	I1208 01:50:55.733056 1047159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:50:55.733110 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.750836 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.851895 1047159 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:50:55.856695 1047159 fix.go:56] duration metric: took 5.355347948s for fixHost
	I1208 01:50:55.856722 1047159 start.go:83] releasing machines lock for "no-preload-389831", held for 5.355403564s
	I1208 01:50:55.856804 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:55.873802 1047159 ssh_runner.go:195] Run: cat /version.json
	I1208 01:50:55.873860 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.874134 1047159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:50:55.874190 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.891794 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.904440 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:56.005327 1047159 ssh_runner.go:195] Run: systemctl --version
	I1208 01:50:56.106979 1047159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:50:56.144384 1047159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:50:56.149115 1047159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:50:56.149201 1047159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:50:56.157950 1047159 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:50:56.157977 1047159 start.go:496] detecting cgroup driver to use...
	I1208 01:50:56.158056 1047159 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:50:56.158131 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:50:56.173988 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:50:56.188154 1047159 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:50:56.188221 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:50:56.204007 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:50:56.217383 1047159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:50:56.340458 1047159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:50:56.458244 1047159 docker.go:234] disabling docker service ...
	I1208 01:50:56.458372 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:50:56.474961 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:50:56.487941 1047159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:50:56.612532 1047159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:50:56.731416 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
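Because the runtime here is CRI-O, the block above stops, disables and masks the cri-dockerd and Docker units so nothing else serves a CRI socket on the node. Condensed into plain systemctl calls, the same effect is roughly the following (illustrative, not the exact order minikube runs):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service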
	I1208 01:50:56.744122 1047159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:50:56.762363 1047159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:50:56.762429 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.772958 1047159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:50:56.773032 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.782289 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.793260 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.807215 1047159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:50:56.816828 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.826522 1047159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.835623 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.845020 1047159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:50:56.852794 1047159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:50:56.860249 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:56.972942 1047159 ssh_runner.go:195] Run: sudo systemctl restart crio
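Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. One quick way to confirm the drop-in picked these up after the restart (illustrative):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf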
	I1208 01:50:57.131014 1047159 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:50:57.131096 1047159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:50:57.134814 1047159 start.go:564] Will wait 60s for crictl version
	I1208 01:50:57.134930 1047159 ssh_runner.go:195] Run: which crictl
	I1208 01:50:57.138347 1047159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:50:57.164245 1047159 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:50:57.164384 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.192737 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.223842 1047159 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:50:57.226769 1047159 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:50:57.243362 1047159 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:50:57.247217 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.257235 1047159 kubeadm.go:884] updating cluster {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:50:57.257353 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:57.257396 1047159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:50:57.289126 1047159 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:50:57.289152 1047159 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:50:57.289160 1047159 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:50:57.289257 1047159 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-389831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
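The kubelet unit above overrides ExecStart with the v1.35.0-beta.0 binary and the node-specific flags (hostname override, node IP, bootstrap kubeconfig); it is written to the node as a systemd drop-in a few lines below. Once systemd has been reloaded, the effective unit can be reviewed on the node with:

    systemctl cat kubelet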
	I1208 01:50:57.289336 1047159 ssh_runner.go:195] Run: crio config
	I1208 01:50:57.362376 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:57.362445 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:57.362479 1047159 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:50:57.362529 1047159 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-389831 NodeName:no-preload-389831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:50:57.362701 1047159 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-389831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:50:57.362790 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:50:57.370735 1047159 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:50:57.370804 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:50:57.378875 1047159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:50:57.391601 1047159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:50:57.404397 1047159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
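The generated kubeadm manifest shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before the control plane is restarted. Recent kubeadm releases can sanity-check such a file directly; an illustrative check would be:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new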
	I1208 01:50:57.417362 1047159 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:50:57.420912 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.430378 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:57.542627 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:57.560054 1047159 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831 for IP: 192.168.76.2
	I1208 01:50:57.560086 1047159 certs.go:195] generating shared ca certs ...
	I1208 01:50:57.560102 1047159 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:57.560238 1047159 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:50:57.560289 1047159 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:50:57.560301 1047159 certs.go:257] generating profile certs ...
	I1208 01:50:57.560406 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key
	I1208 01:50:57.560476 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e
	I1208 01:50:57.560521 1047159 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key
	I1208 01:50:57.560641 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:50:57.560677 1047159 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:50:57.560689 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:50:57.560717 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:50:57.560745 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:50:57.560775 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:50:57.560824 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:57.561421 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:50:57.589599 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:50:57.607045 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:50:57.624770 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:50:57.642560 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:50:57.659981 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:50:57.677502 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:50:57.694330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:50:57.711561 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:50:57.728845 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:50:57.746226 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:50:57.763358 1047159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:50:57.775996 1047159 ssh_runner.go:195] Run: openssl version
	I1208 01:50:57.782091 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.789279 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:50:57.796521 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800117 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800178 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.840997 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:50:57.848519 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.855681 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:50:57.863319 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867059 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867155 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.909407 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:50:57.916742 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.924122 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:50:57.931834 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935527 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935597 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.976793 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
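Each CA above is installed with the same pattern: the PEM is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the .0 names checked above). Sketched for one certificate:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"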
	I1208 01:50:57.984308 1047159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:50:57.988146 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:50:58.029657 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:50:58.071087 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:50:58.113603 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:50:58.154764 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:50:58.195889 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
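The -checkend 86400 probes above ask openssl whether each control-plane certificate is still valid for at least the next 24 hours (a non-zero exit means it is about to expire). Run by hand, the same check looks like this (illustrative loop):

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "${c}.crt expires within 24h"
    done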
	I1208 01:50:58.236998 1047159 kubeadm.go:401] StartCluster: {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:58.237105 1047159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:50:58.237204 1047159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:50:58.294166 1047159 cri.go:89] found id: ""
	I1208 01:50:58.294257 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:50:58.315702 1047159 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:50:58.315725 1047159 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:50:58.315777 1047159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:50:58.339201 1047159 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:50:58.339606 1047159 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.339709 1047159 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-389831" cluster setting kubeconfig missing "no-preload-389831" context setting]
	I1208 01:50:58.340000 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.341275 1047159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:50:58.349234 1047159 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:50:58.349268 1047159 kubeadm.go:602] duration metric: took 33.537509ms to restartPrimaryControlPlane
	I1208 01:50:58.349278 1047159 kubeadm.go:403] duration metric: took 112.291494ms to StartCluster
	I1208 01:50:58.349311 1047159 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.349387 1047159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.350038 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.350246 1047159 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:50:58.350553 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:58.350599 1047159 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:50:58.350662 1047159 addons.go:70] Setting storage-provisioner=true in profile "no-preload-389831"
	I1208 01:50:58.350682 1047159 addons.go:239] Setting addon storage-provisioner=true in "no-preload-389831"
	I1208 01:50:58.350707 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.351226 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.351698 1047159 addons.go:70] Setting dashboard=true in profile "no-preload-389831"
	I1208 01:50:58.351722 1047159 addons.go:239] Setting addon dashboard=true in "no-preload-389831"
	W1208 01:50:58.351729 1047159 addons.go:248] addon dashboard should already be in state true
	I1208 01:50:58.351754 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.352178 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.352328 1047159 addons.go:70] Setting default-storageclass=true in profile "no-preload-389831"
	I1208 01:50:58.352356 1047159 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-389831"
	I1208 01:50:58.352612 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.357573 1047159 out.go:179] * Verifying Kubernetes components...
	I1208 01:50:58.360443 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:58.387989 1047159 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:50:58.390885 1047159 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:50:58.393645 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:50:58.393668 1047159 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:50:58.393739 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.397369 1047159 addons.go:239] Setting addon default-storageclass=true in "no-preload-389831"
	I1208 01:50:58.397417 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.397928 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.404763 1047159 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:50:58.407608 1047159 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.407634 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:50:58.407695 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.415506 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.436422 1047159 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.436450 1047159 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:50:58.436511 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.465705 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.488288 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.584861 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:58.593397 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:50:58.593420 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:50:58.599131 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.612450 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:50:58.612475 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:50:58.634836 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.638144 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:50:58.638170 1047159 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:50:58.654765 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:50:58.654790 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:50:58.671149 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:50:58.671176 1047159 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:50:58.710936 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:50:58.710960 1047159 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:50:58.723710 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:50:58.723735 1047159 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:50:58.736057 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:50:58.736083 1047159 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:50:58.751933 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:58.751957 1047159 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:50:58.764645 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.017903 1047159 node_ready.go:35] waiting up to 6m0s for node "no-preload-389831" to be "Ready" ...
	W1208 01:50:59.018334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018392 1047159 retry.go:31] will retry after 331.98119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018470 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018495 1047159 retry.go:31] will retry after 297.347601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018713 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018744 1047159 retry.go:31] will retry after 160.988987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.180394 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.242451 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.242488 1047159 retry.go:31] will retry after 230.038114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.316680 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:59.351165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.388760 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.388804 1047159 retry.go:31] will retry after 306.01786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.414273 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.414313 1047159 retry.go:31] will retry after 473.308455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.473546 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.541312 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.541396 1047159 retry.go:31] will retry after 291.989778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.695757 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:50:59.766490 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.766527 1047159 retry.go:31] will retry after 640.553822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.833774 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.888354 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.905443 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.905489 1047159 retry.go:31] will retry after 440.366836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.953774 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.953806 1047159 retry.go:31] will retry after 703.737178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
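	The retry.go entries above show minikube's addon installer re-running each kubectl apply after a progressively longer, jittered delay (160ms, 230ms, 306ms, 473ms, 640ms, 703ms in this run). The Go sketch below illustrates that retry-with-backoff pattern; it is not minikube's actual retry.go, and the command, base delay, and attempt limit are hypothetical. Note also that the --validate=false hint printed by kubectl only disables OpenAPI schema validation; it would not help here, since the apply itself still has to reach the API server on localhost:8443, which is refusing connections.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply is an illustrative sketch (not minikube's retry.go) of the
	// pattern visible in the log: run a command and, on failure, wait an
	// increasing, jittered interval before trying again, up to maxAttempts.
	func retryApply(args []string, maxAttempts int) error {
		var err error
		delay := 150 * time.Millisecond // hypothetical base delay
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			out, runErr := exec.Command(args[0], args[1:]...).CombinedOutput()
			if runErr == nil {
				return nil
			}
			err = fmt.Errorf("attempt %d failed: %v\n%s", attempt, runErr, out)
			// Jittered, roughly geometric backoff, similar to the
			// "will retry after ..." intervals above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, runErr)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		// Hypothetical example: the same kind of kubectl apply the addon
		// installer keeps retrying in the log above.
		err := retryApply([]string{"kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}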
	I1208 01:51:00.346648 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:51:00.408383 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:00.427065 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.427134 1047159 retry.go:31] will retry after 1.874925767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:00.479159 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.479193 1047159 retry.go:31] will retry after 1.068550624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.658132 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:00.718468 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.718503 1047159 retry.go:31] will retry after 623.328533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:01.019492 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
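	The node_ready.go warning above is minikube polling the node's Ready condition through the API server at 192.168.76.2:8443, which is refusing connections just like the localhost endpoint used by kubectl. Below is a minimal client-go sketch of such a readiness check; the kubeconfig path and node name are taken from the log, but the code is illustrative, not minikube's node_ready.go.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the named node has condition Ready=True.
	// Sketch only: the kubeconfig path and node name are assumptions.
	func nodeIsReady(kubeconfig, nodeName string) (bool, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return false, err
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return false, err
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
		if err != nil {
			// A refused connection (as in the log) surfaces here, and the caller retries.
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		ready, err := nodeIsReady("/var/lib/minikube/kubeconfig", "no-preload-389831")
		fmt.Println("ready:", ready, "err:", err)
	}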
	I1208 01:51:01.343012 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:01.405101 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.405133 1047159 retry.go:31] will retry after 1.498168314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.548991 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:01.616790 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.616868 1047159 retry.go:31] will retry after 1.425241251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.303165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:02.370799 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.370837 1047159 retry.go:31] will retry after 1.658186868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.903558 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:02.966228 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.966264 1047159 retry.go:31] will retry after 1.304687891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.043183 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:03.103290 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.103323 1047159 retry.go:31] will retry after 1.611194242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:03.519134 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:04.029775 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:04.093970 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.094012 1047159 retry.go:31] will retry after 2.255021581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.271404 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:04.369233 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.369266 1047159 retry.go:31] will retry after 3.144995667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.715505 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:04.779555 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.779589 1047159 retry.go:31] will retry after 3.097864658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:05.519459 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:06.350184 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:06.413195 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:06.413231 1047159 retry.go:31] will retry after 2.677656272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.514488 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:07.575743 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.575780 1047159 retry.go:31] will retry after 6.329439159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.878264 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:07.943875 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.943905 1047159 retry.go:31] will retry after 2.415395367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:08.018434 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:09.092104 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:09.156844 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:09.156908 1047159 retry.go:31] will retry after 7.232089792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:10.019592 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:10.359997 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:10.420935 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:10.420968 1047159 retry.go:31] will retry after 8.971701236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:12.518554 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:13.906369 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:13.974204 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:13.974236 1047159 retry.go:31] will retry after 5.63199332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:15.018587 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:16.389784 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:16.456494 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:16.456525 1047159 retry.go:31] will retry after 8.304163321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:17.018908 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.393167 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:19.454509 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.454549 1047159 retry.go:31] will retry after 12.819064934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:19.519223 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.606483 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:19.665334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.665374 1047159 retry.go:31] will retry after 11.853810657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:22.018660 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:24.518475 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:24.760954 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:24.822030 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:24.822063 1047159 retry.go:31] will retry after 19.398232497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:26.519551 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:28.519603 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:31.018950 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:31.519706 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:31.585619 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:31.585652 1047159 retry.go:31] will retry after 9.119457049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.274696 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:32.335795 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.335830 1047159 retry.go:31] will retry after 17.730424932s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:33.519243 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:35.519358 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:38.019740 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:40.518821 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:40.706239 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:40.765447 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:40.765479 1047159 retry.go:31] will retry after 22.170334944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:43.018819 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:44.221342 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:44.285014 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:44.285052 1047159 retry.go:31] will retry after 25.025724204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:45.519041 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:48.018694 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:50.019104 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:50.066395 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:50.138630 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:50.138667 1047159 retry.go:31] will retry after 30.22765222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:52.518557 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:54.518664 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:57.018497 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:59.519498 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:02.018808 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:02.936150 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:03.008626 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:03.008665 1047159 retry.go:31] will retry after 43.423265509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:04.019439 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:06.518568 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:08.518670 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:09.311359 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:09.377364 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:09.377397 1047159 retry.go:31] will retry after 23.787430998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:10.519478 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:13.019449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:15.518771 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:18.018678 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:20.367361 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:52:20.429944 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:20.430047 1047159 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:52:20.519535 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:23.019133 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:25.019307 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:27.519242 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:30.018749 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:32.019308 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:33.165778 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:33.226192 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:33.226288 1047159 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:52:34.519469 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:37.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:39.519251 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:42.018723 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:44.019269 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:46.432093 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:46.497680 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:46.497781 1047159 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:52:46.502938 1047159 out.go:179] * Enabled addons: 
	I1208 01:52:46.505774 1047159 addons.go:530] duration metric: took 1m48.155164419s for enable addons: enabled=[]
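	The addon-enable sequence above finishes with no addons enabled: every kubectl apply failed because client-side validation could not download the OpenAPI schema from the apiserver (localhost:8443 refused connections), and minikube kept retrying each manifest with growing delays (retry.go: ~9s, 17s, 22s, 25s, 30s, 43s) until it gave up after 1m48s. Note that the --validate=false flag kubectl suggests would only skip that schema download; the apply itself would still need a reachable apiserver, so it would not succeed here on its own. The following is a minimal, illustrative Go sketch of the retry-with-backoff pattern visible in the log, not minikube's actual implementation; the binary and manifest paths are copied from the log, and the simple doubling delay stands in for minikube's jittered schedule.

	// retry_apply.go: illustrative sketch of re-running `kubectl apply` with a growing
	// delay between attempts, mirroring the retry.go lines above (not minikube's real code).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to the kubectl binary shown in the log and retries on
	// failure. The backoff schedule here is a plain doubling delay for illustration.
	func applyWithRetry(manifest string, attempts int) error {
		delay := 9 * time.Second // first retry delay in the log is ~9s
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
			fmt.Printf("will retry after %s: %v\n", delay, lastErr)
			time.Sleep(delay)
			delay *= 2 // grow the wait, echoing the increasing delays in the log
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}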
	W1208 01:52:46.519375 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:49.018487 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:51.019331 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:53.518707 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:55.519582 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:58.019073 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:00.019588 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:02.519532 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:05.023504 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:07.518624 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:09.519024 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:11.519389 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:14.019053 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:16.518622 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:18.519227 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:21.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:23.019558 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:25.519524 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:28.019553 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:30.518668 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:33.018725 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:35.518967 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:37.519455 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:40.018547 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:42.018757 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:44.020627 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:46.518584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:49.018526 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:51.018609 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:53.518615 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:56.018528 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:58.018763 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:00.519130 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:03.018606 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:05.518787 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:08.019519 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:10.518446 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:12.518576 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:14.519449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:17.018671 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:19.518636 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:22.018525 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:24.519178 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:27.018533 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:29.518640 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:31.519126 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:33.519284 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:35.519452 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:38.018667 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:40.518930 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:42.519454 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:45.018619 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:47.019380 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:49.518741 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:51.518789 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:54.018519 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:56.518491 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:58.519112 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:01.018584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:03.018643 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:05.518505 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:08.018536 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:10.019472 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:12.518502 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:14.518620 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:17.019504 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:19.518721 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:22.018570 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:24.518558 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:27.018585 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:29.018662 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:31.518652 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:34.018502 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:36.018610 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:38.518693 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:40.518763 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:43.018606 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:45.018797 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:47.019374 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:49.518685 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:51.519142 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:54.018934 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:56.518500 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:58.518701 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:00.518894 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:02.519392 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:05.018526 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:07.018584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:09.018734 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:11.518579 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:13.519381 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:16.019530 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:18.518601 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:20.518909 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:23.019357 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:25.519360 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:28.019395 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:30.518961 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:32.519312 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:34.519397 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:37.018922 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:39.518405 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:42.018560 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:44.518581 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:47.019431 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:49.518464 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:51.519391 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:54.018689 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:56.518414 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:58.518521 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:59.018412 1047159 node_ready.go:38] duration metric: took 6m0.000405007s for node "no-preload-389831" to be "Ready" ...
	I1208 01:56:59.026905 1047159 out.go:203] 
	W1208 01:56:59.029838 1047159 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:56:59.029857 1047159 out.go:285] * 
	* 
	W1208 01:56:59.032175 1047159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:56:59.035425 1047159 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 80
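The stderr capture above ends in the 6m0s "wait for node Ready" timeout: every probe of https://192.168.76.2:8443 was refused after the restart, so the start exits with GUEST_START / status 80. A minimal local reproduction sketch, assuming the same arm64 host layout and an existing no-preload-389831 profile; the added -v=8 verbosity flag is an assumption for extra detail and is not part of the recorded invocation:

	out/minikube-linux-arm64 start -p no-preload-389831 --memory=3072 --alsologtostderr -v=8 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0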
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1047287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:50:50.554953574Z",
	            "FinishedAt": "2025-12-08T01:50:49.214340581Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eaeeec708b96ab10f53f5e7226e115539fe166bf63ca544042e974e7018b260",
	            "SandboxKey": "/var/run/docker/netns/6eaeeec708b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:00:7d:ce:0b:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "795d8a30b86237e9ff6e670d6bc504ea3f9738fbb154a7d1d8e6085bd1fb8cce",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
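The inspect output shows the kic container itself Running, with container port 8443 published on 127.0.0.1:33815, while the probes above were refused on 192.168.76.2:8443 inside the Docker network. A quick host-side check, assuming the port mapping shown above is still current (the curl probe is illustrative only and not part of the test):

	docker port no-preload-389831 8443
	curl -k https://127.0.0.1:33815/healthz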
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 2 (321.777201ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
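The host state prints as "Running" yet status still exits non-zero, consistent with the refused apiserver connections above. A fuller per-component view can be pulled as JSON, a sketch assuming the same profile and binary:

	out/minikube-linux-arm64 status -p no-preload-389831 --output=json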
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-389831 logs -n 25: (1.093631477s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:54 UTC │                     │
	│ stop    │ -p newest-cni-448023 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p newest-cni-448023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:56:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:56:40.995814 1055021 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:56:40.995993 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996024 1055021 out.go:374] Setting ErrFile to fd 2...
	I1208 01:56:40.996044 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996297 1055021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:56:40.996698 1055021 out.go:368] Setting JSON to false
	I1208 01:56:40.997651 1055021 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23933,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:56:40.997760 1055021 start.go:143] virtualization:  
	I1208 01:56:41.000930 1055021 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:56:41.005767 1055021 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:56:41.005958 1055021 notify.go:221] Checking for updates...
	I1208 01:56:41.009547 1055021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:56:41.012698 1055021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:41.016029 1055021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:56:41.019114 1055021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:56:41.022081 1055021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:56:41.025425 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:41.026092 1055021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:56:41.062956 1055021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:56:41.063137 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.133740 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.124579493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.133841 1055021 docker.go:319] overlay module found
	I1208 01:56:41.136922 1055021 out.go:179] * Using the docker driver based on existing profile
	I1208 01:56:41.139812 1055021 start.go:309] selected driver: docker
	I1208 01:56:41.139836 1055021 start.go:927] validating driver "docker" against &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.139955 1055021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:56:41.140671 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.193763 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.183682659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.194162 1055021 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:56:41.194196 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:41.194260 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:41.194313 1055021 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.197698 1055021 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:56:41.200489 1055021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:56:41.203470 1055021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:56:41.206341 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:41.206393 1055021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:56:41.206406 1055021 cache.go:65] Caching tarball of preloaded images
	I1208 01:56:41.206414 1055021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:56:41.206514 1055021 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:56:41.206524 1055021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:56:41.206659 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.226393 1055021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:56:41.226417 1055021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:56:41.226437 1055021 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:56:41.226470 1055021 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:56:41.226539 1055021 start.go:364] duration metric: took 45.818µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:56:41.226562 1055021 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:56:41.226569 1055021 fix.go:54] fixHost starting: 
	I1208 01:56:41.226872 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.244524 1055021 fix.go:112] recreateIfNeeded on newest-cni-448023: state=Stopped err=<nil>
	W1208 01:56:41.244564 1055021 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:56:42.018560 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:44.518581 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:41.247746 1055021 out.go:252] * Restarting existing docker container for "newest-cni-448023" ...
	I1208 01:56:41.247847 1055021 cli_runner.go:164] Run: docker start newest-cni-448023
	I1208 01:56:41.505835 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.523362 1055021 kic.go:430] container "newest-cni-448023" state is running.
	I1208 01:56:41.523773 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:41.545536 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.545777 1055021 machine.go:94] provisionDockerMachine start ...
	I1208 01:56:41.545848 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:41.570998 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:41.571328 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:41.571336 1055021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:56:41.572041 1055021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:56:44.722629 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.722658 1055021 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:56:44.722733 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.743562 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.743889 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.743906 1055021 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:56:44.912657 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.912755 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.930550 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.930902 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.930926 1055021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
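The shell fragment above is what the provisioner runs over SSH to make sure the node's /etc/hosts maps 127.0.1.1 to the new hostname, adding the entry only when one is not already present. A minimal sketch for checking the result by hand, assuming the newest-cni-448023 profile is still reachable over minikube ssh:

  # confirm the hostname entry written by the snippet above is in the node's /etc/hosts
  minikube ssh -p newest-cni-448023 -- grep newest-cni-448023 /etc/hosts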
	I1208 01:56:45.125086 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:56:45.125166 1055021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:56:45.125215 1055021 ubuntu.go:190] setting up certificates
	I1208 01:56:45.125242 1055021 provision.go:84] configureAuth start
	I1208 01:56:45.125340 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:45.146934 1055021 provision.go:143] copyHostCerts
	I1208 01:56:45.147071 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:56:45.147086 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:56:45.147185 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:56:45.147315 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:56:45.147333 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:56:45.147379 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:56:45.147450 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:56:45.147463 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:56:45.147494 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:56:45.147561 1055021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:56:45.319641 1055021 provision.go:177] copyRemoteCerts
	I1208 01:56:45.319718 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:56:45.319771 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.338151 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.446957 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:56:45.464534 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:56:45.481634 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:56:45.499110 1055021 provision.go:87] duration metric: took 373.83191ms to configureAuth
	I1208 01:56:45.499137 1055021 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:56:45.499354 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:45.499462 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.519312 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:45.520323 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:45.520348 1055021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:56:45.838649 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:56:45.838675 1055021 machine.go:97] duration metric: took 4.292880237s to provisionDockerMachine
	I1208 01:56:45.838688 1055021 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:56:45.838701 1055021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:56:45.838764 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:56:45.838808 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.856107 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.962864 1055021 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:56:45.966280 1055021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:56:45.966310 1055021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:56:45.966321 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:56:45.966376 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:56:45.966455 1055021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:56:45.966565 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:56:45.973812 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:45.990960 1055021 start.go:296] duration metric: took 152.256258ms for postStartSetup
	I1208 01:56:45.991062 1055021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:56:45.991102 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.010295 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.111994 1055021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:56:46.116921 1055021 fix.go:56] duration metric: took 4.890342951s for fixHost
	I1208 01:56:46.116949 1055021 start.go:83] releasing machines lock for "newest-cni-448023", held for 4.89039814s
	I1208 01:56:46.117023 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:46.133998 1055021 ssh_runner.go:195] Run: cat /version.json
	I1208 01:56:46.134053 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.134086 1055021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:56:46.134143 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.155007 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.157578 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.259943 1055021 ssh_runner.go:195] Run: systemctl --version
	I1208 01:56:46.363782 1055021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:56:46.401418 1055021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:56:46.405895 1055021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:56:46.406027 1055021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:56:46.414120 1055021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:56:46.414145 1055021 start.go:496] detecting cgroup driver to use...
	I1208 01:56:46.414178 1055021 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:56:46.414240 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:56:46.430116 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:56:46.443306 1055021 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:56:46.443370 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:56:46.459228 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:56:46.472250 1055021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:56:46.583643 1055021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:56:46.702836 1055021 docker.go:234] disabling docker service ...
	I1208 01:56:46.702974 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:56:46.718081 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:56:46.731165 1055021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:56:46.841278 1055021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:56:46.959396 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:56:46.972986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:56:46.988672 1055021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:56:46.988773 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:46.998541 1055021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:56:46.998635 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.012333 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.022719 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.033036 1055021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:56:47.042410 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.053356 1055021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.066055 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.076106 1055021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:56:47.083610 1055021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:56:47.090937 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.204760 1055021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:56:47.377268 1055021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:56:47.377383 1055021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:56:47.381048 1055021 start.go:564] Will wait 60s for crictl version
	I1208 01:56:47.381161 1055021 ssh_runner.go:195] Run: which crictl
	I1208 01:56:47.384529 1055021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:56:47.407415 1055021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
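In the 01:56:46 to 01:56:47 window the log shows cri-o being reconfigured in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs, conmon_cgroup is reset to "pod", and ip_unprivileged_port_start=0 is injected into default_sysctls before crio is restarted and probed with crictl. A sketch for inspecting the rewritten drop-in and repeating the version probe manually, assuming minikube ssh still works against this profile:

  # show the cri-o settings rewritten by the sed commands above
  minikube ssh -p newest-cni-448023 -- sudo grep -e pause_image -e cgroup_manager -e conmon_cgroup /etc/crio/crio.conf.d/02-crio.conf
  # repeat the version probe the test performs via /usr/local/bin/crictl
  minikube ssh -p newest-cni-448023 -- sudo /usr/local/bin/crictl version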
	I1208 01:56:47.407590 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.438310 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.480028 1055021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:56:47.482931 1055021 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:56:47.498300 1055021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:56:47.502114 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.515024 1055021 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:56:47.517850 1055021 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:56:47.518007 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:47.518083 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.554783 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.554810 1055021 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:56:47.554891 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.580370 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.580396 1055021 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:56:47.580404 1055021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:56:47.580497 1055021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
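The [Service] override above (ExecStart pointing at the v1.35.0-beta.0 kubelet with --hostname-override and --node-ip for 192.168.85.2) is the unit drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A sketch for viewing it on the node afterwards, assuming the container is still running:

  # show the rendered kubelet drop-in and the current state of the unit
  minikube ssh -p newest-cni-448023 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  minikube ssh -p newest-cni-448023 -- sudo systemctl status kubelet --no-pager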
	I1208 01:56:47.580581 1055021 ssh_runner.go:195] Run: crio config
	I1208 01:56:47.630652 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:47.630677 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:47.630697 1055021 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:56:47.630720 1055021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:56:47.630943 1055021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:56:47.631027 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:56:47.638867 1055021 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:56:47.638960 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:56:47.646535 1055021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:56:47.659466 1055021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:56:47.672488 1055021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
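The kubeadm/kubelet/kube-proxy document rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; later in this log (01:56:48.637157) it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. The same comparison can be reproduced by hand, assuming the profile is reachable over minikube ssh:

  # compare the staged kubeadm config against the one already on the node
  minikube ssh -p newest-cni-448023 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new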
	I1208 01:56:47.685612 1055021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:56:47.689373 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.699289 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.852921 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:47.877101 1055021 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:56:47.877130 1055021 certs.go:195] generating shared ca certs ...
	I1208 01:56:47.877147 1055021 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:47.877305 1055021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:56:47.877358 1055021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:56:47.877370 1055021 certs.go:257] generating profile certs ...
	I1208 01:56:47.877482 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:56:47.877551 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:56:47.877603 1055021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:56:47.877731 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:56:47.877771 1055021 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:56:47.877792 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:56:47.877831 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:56:47.877859 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:56:47.877890 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:56:47.877943 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:47.879217 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:56:47.903514 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:56:47.922072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:56:47.939555 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:56:47.956891 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:56:47.976072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:56:47.994485 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:56:48.016256 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:56:48.036003 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:56:48.058425 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:56:48.078107 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:56:48.096426 1055021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:56:48.110183 1055021 ssh_runner.go:195] Run: openssl version
	I1208 01:56:48.117292 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.125194 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:56:48.133030 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136789 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136880 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.178238 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:56:48.186394 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.194429 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:56:48.203481 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207582 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207655 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.249053 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:56:48.257115 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.265010 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:56:48.272913 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276751 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276818 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.318199 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:56:48.326277 1055021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:56:48.330322 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:56:48.371576 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:56:48.412414 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:56:48.454546 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:56:48.499800 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:56:48.544265 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:56:48.590374 1055021 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:48.590473 1055021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:56:48.590547 1055021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:56:48.619202 1055021 cri.go:89] found id: ""
	I1208 01:56:48.619330 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:56:48.627096 1055021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:56:48.627120 1055021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:56:48.627172 1055021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:56:48.634458 1055021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:56:48.635058 1055021 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.635319 1055021 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-448023" cluster setting kubeconfig missing "newest-cni-448023" context setting]
	I1208 01:56:48.635800 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.637157 1055021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:56:48.644838 1055021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:56:48.644913 1055021 kubeadm.go:602] duration metric: took 17.785882ms to restartPrimaryControlPlane
	I1208 01:56:48.644930 1055021 kubeadm.go:403] duration metric: took 54.567759ms to StartCluster
	I1208 01:56:48.644947 1055021 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.645007 1055021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.645870 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.646084 1055021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:56:48.646389 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:48.646439 1055021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:56:48.646504 1055021 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-448023"
	I1208 01:56:48.646529 1055021 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-448023"
	I1208 01:56:48.646555 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647285 1055021 addons.go:70] Setting dashboard=true in profile "newest-cni-448023"
	I1208 01:56:48.647305 1055021 addons.go:239] Setting addon dashboard=true in "newest-cni-448023"
	W1208 01:56:48.647311 1055021 addons.go:248] addon dashboard should already be in state true
	I1208 01:56:48.647331 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.647957 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.648448 1055021 addons.go:70] Setting default-storageclass=true in profile "newest-cni-448023"
	I1208 01:56:48.648476 1055021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-448023"
	I1208 01:56:48.648734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.651945 1055021 out.go:179] * Verifying Kubernetes components...
	I1208 01:56:48.654867 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:48.684864 1055021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:56:48.691009 1055021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:56:48.694226 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:56:48.694251 1055021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:56:48.694323 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.695436 1055021 addons.go:239] Setting addon default-storageclass=true in "newest-cni-448023"
	I1208 01:56:48.695482 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.695884 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.701699 1055021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1208 01:56:47.019431 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:49.518464 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:48.704558 1055021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.704591 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:56:48.704655 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.736846 1055021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.736869 1055021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:56:48.736936 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.742543 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.766983 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.785430 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.885046 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:48.955470 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:56:48.955498 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:56:48.963459 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.965887 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.978338 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:56:48.978366 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:56:49.016188 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:56:49.016210 1055021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:56:49.061303 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:56:49.061328 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:56:49.074921 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:56:49.074987 1055021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:56:49.087412 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:56:49.087487 1055021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:56:49.099641 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:56:49.099667 1055021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:56:49.112487 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:56:49.112550 1055021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:56:49.125264 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.125288 1055021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:56:49.138335 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.508759 1055021 api_server.go:52] waiting for apiserver process to appear ...
	W1208 01:56:49.508918 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509385 1055021 retry.go:31] will retry after 199.05184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509006 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509406 1055021 retry.go:31] will retry after 322.784094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509263 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509418 1055021 retry.go:31] will retry after 353.691521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509538 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:49.709327 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:49.771304 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.771383 1055021 retry.go:31] will retry after 463.845922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.832454 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:49.863948 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:49.893225 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.893260 1055021 retry.go:31] will retry after 412.627767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.933504 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.933538 1055021 retry.go:31] will retry after 461.252989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.009945 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.235907 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:50.306466 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:50.322038 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.322071 1055021 retry.go:31] will retry after 523.830022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:50.380008 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.380051 1055021 retry.go:31] will retry after 753.154513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.395255 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:50.456642 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.456676 1055021 retry.go:31] will retry after 803.433098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.509737 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.846838 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:50.908365 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.908408 1055021 retry.go:31] will retry after 671.521026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.519391 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:54.018689 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:51.009996 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.134042 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.192423 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.192455 1055021 retry.go:31] will retry after 689.227768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.260665 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:51.319134 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.319182 1055021 retry.go:31] will retry after 541.526321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.509442 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.580384 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:51.640452 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.640485 1055021 retry.go:31] will retry after 844.977075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.861863 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:51.882351 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.944280 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.944321 1055021 retry.go:31] will retry after 1.000499188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.967122 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.967155 1055021 retry.go:31] will retry after 859.890122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.010305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:52.486447 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:52.510056 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:56:52.585753 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.585816 1055021 retry.go:31] will retry after 1.004705222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.828167 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:52.886091 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.886122 1055021 retry.go:31] will retry after 2.82316744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.945292 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:53.006627 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.006710 1055021 retry.go:31] will retry after 2.04955933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.009824 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.510073 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.591501 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:53.650678 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.650712 1055021 retry.go:31] will retry after 3.502569911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:54.010159 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:54.509667 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.009590 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.057336 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:55.132269 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.132307 1055021 retry.go:31] will retry after 2.513983979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.509439 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.710171 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:55.769058 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.769091 1055021 retry.go:31] will retry after 2.669645777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:56.518414 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:58.518521 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:59.018412 1047159 node_ready.go:38] duration metric: took 6m0.000405007s for node "no-preload-389831" to be "Ready" ...
	I1208 01:56:59.026905 1047159 out.go:203] 
	W1208 01:56:59.029838 1047159 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:56:59.029857 1047159 out.go:285] * 
	W1208 01:56:59.032175 1047159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:56:59.035425 1047159 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072876779Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072883992Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072889867Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072896496Z" level=info msg="RDT not available in the host system"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072909567Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073778565Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073798208Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073814225Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074485379Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074501871Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074630463Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.075394984Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07576115Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07584312Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123803822Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123963487Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124019635Z" level=info msg="Create NRI interface"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124120732Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124136092Z" level=info msg="runtime interface created"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124147144Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124154217Z" level=info msg="runtime interface starting up..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124160937Z" level=info msg="starting plugins..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124173171Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124229549Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:50:57 no-preload-389831 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:00.472510    4002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.473523    4002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.475461    4002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.476122    4002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.477935    4002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:57:00 up  6:39,  0 user,  load average: 0.25, 0.56, 1.25
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:56:58 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:58 no-preload-389831 kubelet[3885]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:58 no-preload-389831 kubelet[3885]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:58 no-preload-389831 kubelet[3885]: E1208 01:56:58.303657    3885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:58 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:58 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:59 no-preload-389831 kubelet[3890]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:59 no-preload-389831 kubelet[3890]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:59 no-preload-389831 kubelet[3890]: E1208 01:56:59.099707    3890 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:59 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:59 no-preload-389831 kubelet[3917]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:59 no-preload-389831 kubelet[3917]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:59 no-preload-389831 kubelet[3917]: E1208 01:56:59.811024    3917 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:59 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:57:00 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 08 01:57:00 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:57:00 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
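Every apply retry in the quoted log fails the same way: kubectl cannot reach the apiserver on localhost:8443, so the --validate=false hint in the error text would not help, since the apply itself still needs a live server. A minimal way to confirm the apiserver state from the node, assuming the same no-preload-389831 profile and the CRI-O runtime used in this run, is:

    # does an apiserver process exist at all? (the same check minikube keeps retrying above)
    out/minikube-linux-arm64 -p no-preload-389831 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # with the crio runtime, list any apiserver container attempts and their state
    out/minikube-linux-arm64 -p no-preload-389831 ssh "sudo crictl ps -a --name kube-apiserver"

If both come back empty, the failure is upstream of kubectl: the static pod was never created, which matches the empty "container status" table earlier in the quoted log.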
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 2 (335.147431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.71s)
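The kubelet section above shows the underlying cause of this SecondStart failure: kubelet v1.35.0-beta.0 refuses to start on a host that is still on cgroup v1, so no static pods (including kube-apiserver) are ever created and the 6m0s node-Ready wait expires. A quick check of which cgroup hierarchy the host and the kic node container are actually using, assuming the profile name from this run, is:

    # cgroup2fs means cgroup v2; tmpfs means the legacy cgroup v1 hierarchy
    stat -fc %T /sys/fs/cgroup/
    # same check inside the minikube node container for this profile
    out/minikube-linux-arm64 -p no-preload-389831 ssh "stat -fc %T /sys/fs/cgroup/"

On this Ubuntu 20.04 / 5.15 host the result would be consistent with cgroup v1, matching the kubelet validation error repeated at restart counters 482-484.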

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1208 01:54:52.721676  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:55:20.422892  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:55:34.380282  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:55:45.329514  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.768710933s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
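The addon enable fails for the same reason as the other tests in this group: the apiserver on newest-cni-448023 never answers on localhost:8443, so every kubectl apply of the metrics-server manifests is rejected before it can be validated. Assuming the cluster is later brought to a state where status reports a running apiserver, the exact command from this test can simply be re-run:

    # confirm the apiserver is actually up for this profile first
    out/minikube-linux-arm64 status -p newest-cni-448023
    # then retry the same addon enable used by the test
    out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-448023 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain

minikube already retries the underlying kubectl apply with backoff (retry.go, as seen earlier in this report), so the enable only needs repeating once the control plane is reachable.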
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-448023
helpers_test.go:243: (dbg) docker inspect newest-cni-448023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	        "Created": "2025-12-08T01:46:34.353152924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1040368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:46:34.40860903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hosts",
	        "LogPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9-json.log",
	        "Name": "/newest-cni-448023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-448023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-448023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	                "LowerDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-448023",
	                "Source": "/var/lib/docker/volumes/newest-cni-448023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-448023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-448023",
	                "name.minikube.sigs.k8s.io": "newest-cni-448023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54e2b1bd2134d16d4b7d139055c4702411c741fdf7b640d1372180a746c06a18",
	            "SandboxKey": "/var/run/docker/netns/54e2b1bd2134",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-448023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:88:2b:75:de:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec5af7f0fdbc70a95f83d97d8a04145286c7acd7e864f0f850cd22983b469ab7",
	                    "EndpointID": "3442b38b17971707b26d88f3f2afa853925f6fb22e828e9bc3241996d1d592b4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-448023",
	                        "ff1a1ad3010f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
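The inspect output confirms the kic container is running and that the guest apiserver port 8443/tcp is published to 127.0.0.1:33810 on the host, so the failure sits inside the guest rather than in the Docker port mapping. The same mapping can be read back without scanning the full JSON, assuming the container name from this run:

    # short form: print the host address bound to the guest apiserver port
    docker port newest-cni-448023 8443/tcp
    # equivalent, via a Go template over the inspect data
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-448023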
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 6 (353.171751ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:56:38.145480 1054490 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
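The status command exits 6 because the kubeconfig has no endpoint entry for this profile, which is also what the "stale minikube-vm" warning in stdout points at. Per the hint in that output, the context can be repaired non-destructively, assuming the kubeconfig path shown in the stderr above:

    # rewrite the kubeconfig entry for this profile to its current endpoint
    out/minikube-linux-arm64 update-context -p newest-cni-448023
    # verify the context now exists and points at the profile
    kubectl config get-contexts newest-cni-448023

This only fixes the kubeconfig bookkeeping; the apiserver failure reported earlier in this test is unaffected.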
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:40 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-172173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │                     │
	│ stop    │ -p embed-certs-172173 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:50:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:50:50.286498 1047159 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:50:50.286662 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.286699 1047159 out.go:374] Setting ErrFile to fd 2...
	I1208 01:50:50.286711 1047159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:50:50.287030 1047159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:50:50.287411 1047159 out.go:368] Setting JSON to false
	I1208 01:50:50.288307 1047159 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23583,"bootTime":1765135068,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:50:50.288377 1047159 start.go:143] virtualization:  
	I1208 01:50:50.291362 1047159 out.go:179] * [no-preload-389831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:50:50.295297 1047159 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:50:50.295435 1047159 notify.go:221] Checking for updates...
	I1208 01:50:50.301190 1047159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:50:50.304190 1047159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:50.307152 1047159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:50:50.310056 1047159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:50:50.312896 1047159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:50:50.316257 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:50.316883 1047159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:50:50.344630 1047159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:50:50.344748 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.404071 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.394347428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.404175 1047159 docker.go:319] overlay module found
	I1208 01:50:50.409385 1047159 out.go:179] * Using the docker driver based on existing profile
	I1208 01:50:50.412316 1047159 start.go:309] selected driver: docker
	I1208 01:50:50.412334 1047159 start.go:927] validating driver "docker" against &{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.412446 1047159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:50:50.413148 1047159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:50:50.469330 1047159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:50:50.460395311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:50:50.469668 1047159 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:50:50.469703 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:50.469766 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:50.469803 1047159 start.go:353] cluster config:
	{Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:50.473111 1047159 out.go:179] * Starting "no-preload-389831" primary control-plane node in "no-preload-389831" cluster
	I1208 01:50:50.475845 1047159 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:50:50.478645 1047159 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:50:50.481298 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:50.481363 1047159 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:50:50.481427 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.481703 1047159 cache.go:107] acquiring lock: {Name:mkb488f77623cf5688783098c8af8f37e2ccf2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481784 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:50:50.481800 1047159 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.113µs
	I1208 01:50:50.481812 1047159 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:50:50.481825 1047159 cache.go:107] acquiring lock: {Name:mk46c5b5a799bb57ec4fc052703439a88454d6c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481854 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:50:50.481859 1047159 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.513µs
	I1208 01:50:50.481865 1047159 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481874 1047159 cache.go:107] acquiring lock: {Name:mkd948fd592ac79c85c21b030b5344321f29366e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481904 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:50:50.481909 1047159 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 35.783µs
	I1208 01:50:50.481915 1047159 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481925 1047159 cache.go:107] acquiring lock: {Name:mk937612bf3f3168a18ddaac7a61a8bae665cda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481950 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:50:50.481956 1047159 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.206µs
	I1208 01:50:50.481962 1047159 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:50:50.481970 1047159 cache.go:107] acquiring lock: {Name:mk12ceb359422aeb489a7c1f33a7ec5ed809694f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.481994 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:50:50.481999 1047159 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.179µs
	I1208 01:50:50.482005 1047159 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:50:50.482018 1047159 cache.go:107] acquiring lock: {Name:mk26da6a2fb489baaddcecf1a83cf045eefe1b48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482042 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:50:50.482047 1047159 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.37µs
	I1208 01:50:50.482052 1047159 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:50:50.482061 1047159 cache.go:107] acquiring lock: {Name:mk855f3a105742255ca91bc6cacb964e2740cdc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482085 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:50:50.482090 1047159 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 30.138µs
	I1208 01:50:50.482095 1047159 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:50:50.482104 1047159 cache.go:107] acquiring lock: {Name:mk695dd8e1a707c0142f2b3898e789d03306fcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.482128 1047159 cache.go:115] /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:50:50.482132 1047159 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.187µs
	I1208 01:50:50.482138 1047159 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:50:50.482143 1047159 cache.go:87] Successfully saved all images to host disk.
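The cache.go lines above confirm that every image required for Kubernetes v1.35.0-beta.0 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns, plus the storage provisioner) is already saved as a tarball in the test host's .minikube cache, so nothing needs to be re-downloaded for this restart. A quick way to see those tarballs on the Jenkins host, using the cache path from the log:

	# per-image tarballs referenced by the cache checks above (run on the test host)
	ls /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/registry.k8s.io/
	ls /home/jenkins/minikube-integration/22054-789938/.minikube/cache/images/arm64/gcr.io/k8s-minikube/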
	I1208 01:50:50.501174 1047159 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:50:50.501198 1047159 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:50:50.501214 1047159 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:50:50.501245 1047159 start.go:360] acquireMachinesLock for no-preload-389831: {Name:mkc005fe96402610ac376caa09ffa5218e546ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:50:50.501307 1047159 start.go:364] duration metric: took 39.935µs to acquireMachinesLock for "no-preload-389831"
	I1208 01:50:50.501330 1047159 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:50:50.501339 1047159 fix.go:54] fixHost starting: 
	I1208 01:50:50.501613 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.517984 1047159 fix.go:112] recreateIfNeeded on no-preload-389831: state=Stopped err=<nil>
	W1208 01:50:50.518022 1047159 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:50:50.521355 1047159 out.go:252] * Restarting existing docker container for "no-preload-389831" ...
	I1208 01:50:50.521454 1047159 cli_runner.go:164] Run: docker start no-preload-389831
	I1208 01:50:50.809627 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:50.833454 1047159 kic.go:430] container "no-preload-389831" state is running.
	I1208 01:50:50.833842 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:50.859105 1047159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/config.json ...
	I1208 01:50:50.859482 1047159 machine.go:94] provisionDockerMachine start ...
	I1208 01:50:50.859658 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:50.883035 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:50.883401 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:50.883410 1047159 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:50:50.884458 1047159 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37578->127.0.0.1:33812: read: connection reset by peer
	I1208 01:50:54.042538 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.042564 1047159 ubuntu.go:182] provisioning hostname "no-preload-389831"
	I1208 01:50:54.042629 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.060212 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.060523 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.060540 1047159 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-389831 && echo "no-preload-389831" | sudo tee /etc/hostname
	I1208 01:50:54.224761 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-389831
	
	I1208 01:50:54.224878 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:54.243516 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:54.243871 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:54.243894 1047159 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-389831' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-389831/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-389831' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:50:54.395341 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
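The SSH script above ensures the container's /etc/hosts maps 127.0.1.1 to the profile name after the hostname is set. A minimal spot-check from the test host, assuming the no-preload-389831 profile from this log is still running:

	# hostname and hosts entry written by the provisioner
	minikube ssh -p no-preload-389831 -- hostname
	minikube ssh -p no-preload-389831 -- grep 127.0.1.1 /etc/hosts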
	I1208 01:50:54.395369 1047159 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:50:54.395395 1047159 ubuntu.go:190] setting up certificates
	I1208 01:50:54.395406 1047159 provision.go:84] configureAuth start
	I1208 01:50:54.395468 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:54.414388 1047159 provision.go:143] copyHostCerts
	I1208 01:50:54.414467 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:50:54.414483 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:50:54.414563 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:50:54.414673 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:50:54.414678 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:50:54.414706 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:50:54.414764 1047159 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:50:54.414768 1047159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:50:54.414791 1047159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:50:54.414925 1047159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.no-preload-389831 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-389831]
	I1208 01:50:55.069511 1047159 provision.go:177] copyRemoteCerts
	I1208 01:50:55.069604 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:50:55.069660 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.089775 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.199796 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:50:55.220330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:50:55.238828 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:50:55.258151 1047159 provision.go:87] duration metric: took 862.724063ms to configureAuth
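configureAuth regenerated the machine server certificate with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-389831) and copied it to /etc/docker/server.pem inside the node. As a sketch, the SANs on the installed certificate can be inspected with:

	# print the Subject Alternative Names of the provisioned server certificate
	minikube ssh -p no-preload-389831 -- "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"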
	I1208 01:50:55.258179 1047159 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:50:55.258429 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:55.258562 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.279199 1047159 main.go:143] libmachine: Using SSH client type: native
	I1208 01:50:55.279708 1047159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33812 <nil> <nil>}
	I1208 01:50:55.279744 1047159 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:50:55.579255 1047159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:50:55.579319 1047159 machine.go:97] duration metric: took 4.719823255s to provisionDockerMachine
	I1208 01:50:55.579345 1047159 start.go:293] postStartSetup for "no-preload-389831" (driver="docker")
	I1208 01:50:55.579373 1047159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:50:55.579468 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:50:55.579542 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.598239 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.702980 1047159 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:50:55.706389 1047159 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:50:55.706419 1047159 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:50:55.706430 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:50:55.706488 1047159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:50:55.706577 1047159 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:50:55.706694 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:50:55.715414 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:55.732970 1047159 start.go:296] duration metric: took 153.595815ms for postStartSetup
	I1208 01:50:55.733056 1047159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:50:55.733110 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.750836 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.851895 1047159 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:50:55.856695 1047159 fix.go:56] duration metric: took 5.355347948s for fixHost
	I1208 01:50:55.856722 1047159 start.go:83] releasing machines lock for "no-preload-389831", held for 5.355403564s
	I1208 01:50:55.856804 1047159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-389831
	I1208 01:50:55.873802 1047159 ssh_runner.go:195] Run: cat /version.json
	I1208 01:50:55.873860 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.874134 1047159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:50:55.874190 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:55.891794 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:55.904440 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:56.005327 1047159 ssh_runner.go:195] Run: systemctl --version
	I1208 01:50:56.106979 1047159 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:50:56.144384 1047159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:50:56.149115 1047159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:50:56.149201 1047159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:50:56.157950 1047159 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:50:56.157977 1047159 start.go:496] detecting cgroup driver to use...
	I1208 01:50:56.158056 1047159 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:50:56.158131 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:50:56.173988 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:50:56.188154 1047159 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:50:56.188221 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:50:56.204007 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:50:56.217383 1047159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:50:56.340458 1047159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:50:56.458244 1047159 docker.go:234] disabling docker service ...
	I1208 01:50:56.458372 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:50:56.474961 1047159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:50:56.487941 1047159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:50:56.612532 1047159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:50:56.731416 1047159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:50:56.744122 1047159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:50:56.762363 1047159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:50:56.762429 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.772958 1047159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:50:56.773032 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.782289 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.793260 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.807215 1047159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:50:56.816828 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.826522 1047159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.835623 1047159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:50:56.845020 1047159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:50:56.852794 1047159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:50:56.860249 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:56.972942 1047159 ssh_runner.go:195] Run: sudo systemctl restart crio
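The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroupfs cgroup manager, conmon cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl) before this restart. A one-line check of what the file ended up containing:

	# settings the sed edits above should have left in 02-crio.conf
	minikube ssh -p no-preload-389831 -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"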
	I1208 01:50:57.131014 1047159 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:50:57.131096 1047159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:50:57.134814 1047159 start.go:564] Will wait 60s for crictl version
	I1208 01:50:57.134930 1047159 ssh_runner.go:195] Run: which crictl
	I1208 01:50:57.138347 1047159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:50:57.164245 1047159 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:50:57.164384 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.192737 1047159 ssh_runner.go:195] Run: crio --version
	I1208 01:50:57.223842 1047159 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:50:57.226769 1047159 cli_runner.go:164] Run: docker network inspect no-preload-389831 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:50:57.243362 1047159 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:50:57.247217 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.257235 1047159 kubeadm.go:884] updating cluster {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:50:57.257353 1047159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:50:57.257396 1047159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:50:57.289126 1047159 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:50:57.289152 1047159 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:50:57.289160 1047159 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:50:57.289257 1047159 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-389831 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
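The [Service] override above (ExecStart pointing at the v1.35.0-beta.0 kubelet with the node IP and hostname override) is installed a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The merged unit the node actually runs can be shown with:

	# kubelet unit plus the generated drop-in
	minikube ssh -p no-preload-389831 -- systemctl cat kubelet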
	I1208 01:50:57.289336 1047159 ssh_runner.go:195] Run: crio config
	I1208 01:50:57.362376 1047159 cni.go:84] Creating CNI manager for ""
	I1208 01:50:57.362445 1047159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:50:57.362479 1047159 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:50:57.362529 1047159 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-389831 NodeName:no-preload-389831 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:50:57.362701 1047159 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-389831"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:50:57.362790 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:50:57.370735 1047159 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:50:57.370804 1047159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:50:57.378875 1047159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:50:57.391601 1047159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:50:57.404397 1047159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
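The kubeadm config printed earlier is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. As a sketch, it could be sanity-checked against the bundled kubeadm before the control plane is brought back up; this assumes the "kubeadm config validate" subcommand is available in this kubeadm version:

	# offline validation of the generated kubeadm config (assumed subcommand)
	minikube ssh -p no-preload-389831 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new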
	I1208 01:50:57.417362 1047159 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:50:57.420912 1047159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:50:57.430378 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:57.542627 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:57.560054 1047159 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831 for IP: 192.168.76.2
	I1208 01:50:57.560086 1047159 certs.go:195] generating shared ca certs ...
	I1208 01:50:57.560102 1047159 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:57.560238 1047159 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:50:57.560289 1047159 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:50:57.560301 1047159 certs.go:257] generating profile certs ...
	I1208 01:50:57.560406 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.key
	I1208 01:50:57.560476 1047159 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key.2f54046e
	I1208 01:50:57.560521 1047159 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key
	I1208 01:50:57.560641 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:50:57.560677 1047159 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:50:57.560689 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:50:57.560717 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:50:57.560745 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:50:57.560775 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:50:57.560824 1047159 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:50:57.561421 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:50:57.589599 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:50:57.607045 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:50:57.624770 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:50:57.642560 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:50:57.659981 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:50:57.677502 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:50:57.694330 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:50:57.711561 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:50:57.728845 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:50:57.746226 1047159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:50:57.763358 1047159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:50:57.775996 1047159 ssh_runner.go:195] Run: openssl version
	I1208 01:50:57.782091 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.789279 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:50:57.796521 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800117 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.800178 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:50:57.840997 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:50:57.848519 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.855681 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:50:57.863319 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867059 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.867155 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:50:57.909407 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:50:57.916742 1047159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.924122 1047159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:50:57.931834 1047159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935527 1047159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.935597 1047159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:50:57.976793 1047159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
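The three blocks above repeat the same pattern for each extra CA certificate: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under <hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of that hash-and-link step, assuming it runs with root privileges on the node; the certificate path is only illustrative:

    // Sketch of the hash-and-symlink step shown in the log above
    // (openssl x509 -hash -noout followed by sudo ln -fs).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func linkCACert(pem string) error {
        // Prints the subject hash, e.g. "b5213941"; the system trust store
        // expects a symlink named <hash>.0 pointing at the certificate.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", pem, link).Run()
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }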
	I1208 01:50:57.984308 1047159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:50:57.988146 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:50:58.029657 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:50:58.071087 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:50:58.113603 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:50:58.154764 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:50:58.195889 1047159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
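Before reusing the existing control-plane certificates, the runner checks that each one stays valid for at least another 24 hours (openssl x509 -checkend 86400, i.e. 86400 seconds). A small sketch of the same check over the paths seen above; an illustration, not minikube's own code:

    // Sketch of the 24-hour expiry check (-checkend 86400) run for each
    // control-plane certificate listed in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
            "/var/lib/minikube/certs/etcd/peer.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            // openssl exits non-zero if the certificate expires within the window.
            err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
            fmt.Printf("%s valid >24h: %v\n", c, err == nil)
        }
    }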
	I1208 01:50:58.236998 1047159 kubeadm.go:401] StartCluster: {Name:no-preload-389831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-389831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:50:58.237105 1047159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:50:58.237204 1047159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:50:58.294166 1047159 cri.go:89] found id: ""
	I1208 01:50:58.294257 1047159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:50:58.315702 1047159 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:50:58.315725 1047159 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:50:58.315777 1047159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:50:58.339201 1047159 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:50:58.339606 1047159 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-389831" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.339709 1047159 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-389831" cluster setting kubeconfig missing "no-preload-389831" context setting]
	I1208 01:50:58.340000 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.341275 1047159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:50:58.349234 1047159 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:50:58.349268 1047159 kubeadm.go:602] duration metric: took 33.537509ms to restartPrimaryControlPlane
	I1208 01:50:58.349278 1047159 kubeadm.go:403] duration metric: took 112.291494ms to StartCluster
	I1208 01:50:58.349311 1047159 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.349387 1047159 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:50:58.350038 1047159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:50:58.350246 1047159 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:50:58.350553 1047159 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:50:58.350599 1047159 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:50:58.350662 1047159 addons.go:70] Setting storage-provisioner=true in profile "no-preload-389831"
	I1208 01:50:58.350682 1047159 addons.go:239] Setting addon storage-provisioner=true in "no-preload-389831"
	I1208 01:50:58.350707 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.351226 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.351698 1047159 addons.go:70] Setting dashboard=true in profile "no-preload-389831"
	I1208 01:50:58.351722 1047159 addons.go:239] Setting addon dashboard=true in "no-preload-389831"
	W1208 01:50:58.351729 1047159 addons.go:248] addon dashboard should already be in state true
	I1208 01:50:58.351754 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.352178 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.352328 1047159 addons.go:70] Setting default-storageclass=true in profile "no-preload-389831"
	I1208 01:50:58.352356 1047159 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-389831"
	I1208 01:50:58.352612 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.357573 1047159 out.go:179] * Verifying Kubernetes components...
	I1208 01:50:58.360443 1047159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:50:58.387989 1047159 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:50:58.390885 1047159 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:50:58.393645 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:50:58.393668 1047159 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:50:58.393739 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.397369 1047159 addons.go:239] Setting addon default-storageclass=true in "no-preload-389831"
	I1208 01:50:58.397417 1047159 host.go:66] Checking if "no-preload-389831" exists ...
	I1208 01:50:58.397928 1047159 cli_runner.go:164] Run: docker container inspect no-preload-389831 --format={{.State.Status}}
	I1208 01:50:58.404763 1047159 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:50:58.407608 1047159 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.407634 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:50:58.407695 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.415506 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.436422 1047159 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.436450 1047159 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:50:58.436511 1047159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-389831
	I1208 01:50:58.465705 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.488288 1047159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/no-preload-389831/id_rsa Username:docker}
	I1208 01:50:58.584861 1047159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:50:58.593397 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:50:58.593420 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:50:58.599131 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:50:58.612450 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:50:58.612475 1047159 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:50:58.634836 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:58.638144 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:50:58.638170 1047159 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:50:58.654765 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:50:58.654790 1047159 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:50:58.671149 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:50:58.671176 1047159 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:50:58.710936 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:50:58.710960 1047159 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:50:58.723710 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:50:58.723735 1047159 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:50:58.736057 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:50:58.736083 1047159 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:50:58.751933 1047159 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:58.751957 1047159 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:50:58.764645 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.017903 1047159 node_ready.go:35] waiting up to 6m0s for node "no-preload-389831" to be "Ready" ...
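Here the runner begins waiting up to 6m0s for node "no-preload-389831" to report Ready. An equivalent manual check can be sketched with kubectl wait; this is an assumed way to reproduce the wait by hand, not what minikube itself runs:

    // Block until the node reports Ready or the 6-minute timeout expires,
    // mirroring the wait announced in the log line above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
            "node/no-preload-389831", "--timeout=6m")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("node did not become Ready in time:", err)
        }
    }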
	W1208 01:50:59.018334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018392 1047159 retry.go:31] will retry after 331.98119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018470 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018495 1047159 retry.go:31] will retry after 297.347601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.018713 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.018744 1047159 retry.go:31] will retry after 160.988987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
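Every apply in this stretch fails the same way: kubectl's client-side validation tries to download the OpenAPI document from https://localhost:8443, the apiserver is not yet listening, and the connection is refused, so the addon manager logs "apply failed, will retry" and backs off for a few hundred milliseconds before trying again (the retry.go lines). A minimal retry-with-backoff sketch in that spirit; the delays and structure are illustrative assumptions, not minikube's actual retry code, and the error text itself notes validation could be skipped with --validate=false:

    // Illustrative retry-with-backoff around a kubectl apply, modelled on the
    // "will retry after ..." lines above. Not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            // Fails with "connection refused" until the apiserver is reachable,
            // because apply validates against the server's OpenAPI document.
            err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }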
	I1208 01:50:59.180394 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.242451 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.242488 1047159 retry.go:31] will retry after 230.038114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.316680 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:50:59.351165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.388760 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.388804 1047159 retry.go:31] will retry after 306.01786ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.414273 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.414313 1047159 retry.go:31] will retry after 473.308455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.473546 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:50:59.541312 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.541396 1047159 retry.go:31] will retry after 291.989778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.695757 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:50:59.766490 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.766527 1047159 retry.go:31] will retry after 640.553822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.833774 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:50:59.888354 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:50:59.905443 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.905489 1047159 retry.go:31] will retry after 440.366836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:50:59.953774 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:50:59.953806 1047159 retry.go:31] will retry after 703.737178ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.346648 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:51:00.408383 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:00.427065 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.427134 1047159 retry.go:31] will retry after 1.874925767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:00.479159 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.479193 1047159 retry.go:31] will retry after 1.068550624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.658132 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:00.718468 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:00.718503 1047159 retry.go:31] will retry after 623.328533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:01.019492 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
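	[editor's note] The `node_ready.go:55` warnings interleaved with the addon retries show the same start-up loop polling the node object at `https://192.168.76.2:8443/api/v1/nodes/no-preload-389831` until the apiserver answers and the Ready condition can be read; while the apiserver is down every GET fails with "connection refused" and the loop just waits and retries. A rough client-go sketch of that kind of poll (assumed approach, not minikube's node_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True,
// tolerating transient "connection refused" errors while the apiserver restarts.
func waitNodeReady(kubeconfig, nodeName string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			// Mirrors the log: the GET fails until the apiserver is listening again.
			fmt.Printf("error getting node %q (will retry): %v\n", nodeName, err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %v", nodeName, timeout)
}

func main() {
	if err := waitNodeReady("/var/lib/minikube/kubeconfig", "no-preload-389831", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```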
	I1208 01:51:01.343012 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:01.405101 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.405133 1047159 retry.go:31] will retry after 1.498168314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.548991 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:01.616790 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:01.616868 1047159 retry.go:31] will retry after 1.425241251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.303165 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:02.370799 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.370837 1047159 retry.go:31] will retry after 1.658186868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.903558 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:02.966228 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:02.966264 1047159 retry.go:31] will retry after 1.304687891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.043183 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:03.103290 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:03.103323 1047159 retry.go:31] will retry after 1.611194242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:03.519134 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:04.029775 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:04.093970 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.094012 1047159 retry.go:31] will retry after 2.255021581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.271404 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:04.369233 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.369266 1047159 retry.go:31] will retry after 3.144995667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.715505 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:04.779555 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:04.779589 1047159 retry.go:31] will retry after 3.097864658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:05.519459 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:06.350184 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:06.413195 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:06.413231 1047159 retry.go:31] will retry after 2.677656272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.514488 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:07.575743 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.575780 1047159 retry.go:31] will retry after 6.329439159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.878264 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:07.943875 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:07.943905 1047159 retry.go:31] will retry after 2.415395367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:08.018434 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:09.092104 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:09.156844 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:09.156908 1047159 retry.go:31] will retry after 7.232089792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:10.019592 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:10.359997 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:10.420935 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:10.420968 1047159 retry.go:31] will retry after 8.971701236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:12.518554 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:13.906369 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:13.974204 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:13.974236 1047159 retry.go:31] will retry after 5.63199332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:15.018587 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:16.389784 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:16.456494 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:16.456525 1047159 retry.go:31] will retry after 8.304163321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:17.018908 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.393167 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:19.454509 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.454549 1047159 retry.go:31] will retry after 12.819064934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:19.519223 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:19.606483 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:19.665334 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:19.665374 1047159 retry.go:31] will retry after 11.853810657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:22.018660 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:24.518475 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:24.760954 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:24.822030 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:24.822063 1047159 retry.go:31] will retry after 19.398232497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:26.519551 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:28.519603 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:31.018950 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:31.519706 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:31.585619 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:31.585652 1047159 retry.go:31] will retry after 9.119457049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.274696 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:32.335795 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:32.335830 1047159 retry.go:31] will retry after 17.730424932s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:33.519243 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:35.519358 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:38.019740 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:40.518821 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:40.706239 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:51:40.765447 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:40.765479 1047159 retry.go:31] will retry after 22.170334944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:43.018819 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:44.221342 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:51:44.285014 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:44.285052 1047159 retry.go:31] will retry after 25.025724204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:45.519041 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:48.018694 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:50.019104 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:51:50.066395 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:51:50.138630 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:51:50.138667 1047159 retry.go:31] will retry after 30.22765222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:51:52.518557 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:54.518664 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:57.018497 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:51:59.519498 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:02.018808 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:02.936150 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:03.008626 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:03.008665 1047159 retry.go:31] will retry after 43.423265509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:04.019439 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:06.518568 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:08.518670 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:09.311359 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:09.377364 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:52:09.377397 1047159 retry.go:31] will retry after 23.787430998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:10.519478 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:13.019449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:15.518771 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:18.018678 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:20.367361 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:52:20.429944 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:20.430047 1047159 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:52:20.519535 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:23.019133 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:25.019307 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:27.519242 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:30.018749 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:32.019308 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:33.165778 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:52:33.226192 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:33.226288 1047159 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
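	(Editorial note, not part of the captured run: the validation failures above are a symptom of the apiserver on localhost:8443 being unreachable, not of malformed manifests; kubectl fails because it cannot download the OpenAPI schema it validates against. As a rough sketch only, following the workaround the error text itself suggests, the same apply could skip schema validation once the apiserver is reachable, e.g.:

	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	    -f /etc/kubernetes/addons/dashboard-ns.yaml

	Skipping validation only bypasses the schema download; it does not address the refused connection itself.)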
	W1208 01:52:34.519469 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:37.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:39.519251 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:42.018723 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:44.019269 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:52:46.432093 1047159 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:52:46.497680 1047159 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:52:46.497781 1047159 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:52:46.502938 1047159 out.go:179] * Enabled addons: 
	I1208 01:52:46.505774 1047159 addons.go:530] duration metric: took 1m48.155164419s for enable addons: enabled=[]
	W1208 01:52:46.519375 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:49.018487 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:51.019331 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:53.518707 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:55.519582 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:52:58.019073 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:00.019588 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:02.519532 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:05.023504 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:07.518624 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:09.519024 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:11.519389 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:14.019053 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:16.518622 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:18.519227 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:21.018665 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:23.019558 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:25.519524 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:28.019553 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:30.518668 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:33.018725 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:35.518967 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:37.519455 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:40.018547 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:42.018757 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:44.020627 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:46.518584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:49.018526 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:51.018609 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:53.518615 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:56.018528 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:53:58.018763 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:00.519130 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:03.018606 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:05.518787 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:08.019519 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:10.518446 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:12.518576 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:14.519449 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:17.018671 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:19.518636 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:22.018525 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:24.519178 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:27.018533 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:29.518640 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:31.519126 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:33.519284 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:35.519452 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:38.018667 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:40.518930 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:42.519454 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:45.018619 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:54:49.847048 1039943 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:54:49.847077 1039943 kubeadm.go:319] 
	I1208 01:54:49.847149 1039943 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:54:49.852553 1039943 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:54:49.852619 1039943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:54:49.852721 1039943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:54:49.852785 1039943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:54:49.852825 1039943 kubeadm.go:319] OS: Linux
	I1208 01:54:49.852870 1039943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:54:49.852918 1039943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:54:49.852965 1039943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:54:49.853013 1039943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:54:49.853072 1039943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:54:49.853130 1039943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:54:49.853178 1039943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:54:49.853231 1039943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:54:49.853284 1039943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:54:49.853372 1039943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:54:49.853474 1039943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:54:49.853612 1039943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:54:49.853714 1039943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:54:49.856709 1039943 out.go:252]   - Generating certificates and keys ...
	I1208 01:54:49.856814 1039943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:54:49.856895 1039943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:54:49.856984 1039943 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:54:49.857061 1039943 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:54:49.857172 1039943 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:54:49.857232 1039943 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:54:49.857326 1039943 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:54:49.857415 1039943 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:54:49.857499 1039943 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:54:49.857603 1039943 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:54:49.857682 1039943 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:54:49.857823 1039943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:54:49.857891 1039943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:54:49.857959 1039943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:54:49.858019 1039943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:54:49.858108 1039943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:54:49.858191 1039943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:54:49.858305 1039943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:54:49.858378 1039943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1208 01:54:47.019380 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:49.518741 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:54:49.863237 1039943 out.go:252]   - Booting up control plane ...
	I1208 01:54:49.863352 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:54:49.863438 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:54:49.863515 1039943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:54:49.863629 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:54:49.863729 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:54:49.863835 1039943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:54:49.863923 1039943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:54:49.863965 1039943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:54:49.864100 1039943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:54:49.864207 1039943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:54:49.864274 1039943 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000263477s
	I1208 01:54:49.864282 1039943 kubeadm.go:319] 
	I1208 01:54:49.864339 1039943 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:54:49.864374 1039943 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:54:49.864481 1039943 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:54:49.864489 1039943 kubeadm.go:319] 
	I1208 01:54:49.864593 1039943 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:54:49.864629 1039943 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:54:49.864662 1039943 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:54:49.864736 1039943 kubeadm.go:403] duration metric: took 8m7.244236129s to StartCluster
	I1208 01:54:49.864786 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.864852 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.864950 1039943 kubeadm.go:319] 
	I1208 01:54:49.890049 1039943 cri.go:89] found id: ""
	I1208 01:54:49.890071 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.890079 1039943 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.890086 1039943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.890149 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.915976 1039943 cri.go:89] found id: ""
	I1208 01:54:49.916000 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.916009 1039943 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.916015 1039943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.916071 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.940080 1039943 cri.go:89] found id: ""
	I1208 01:54:49.940104 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.940113 1039943 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.940119 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.940181 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.964287 1039943 cri.go:89] found id: ""
	I1208 01:54:49.964311 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.964320 1039943 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.964327 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.964382 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.987947 1039943 cri.go:89] found id: ""
	I1208 01:54:49.987971 1039943 logs.go:282] 0 containers: []
	W1208 01:54:49.987979 1039943 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.987986 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.988043 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:50.047343 1039943 cri.go:89] found id: ""
	I1208 01:54:50.047419 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.047442 1039943 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:50.047460 1039943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:50.047550 1039943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:50.093548 1039943 cri.go:89] found id: ""
	I1208 01:54:50.093623 1039943 logs.go:282] 0 containers: []
	W1208 01:54:50.093648 1039943 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:50.093671 1039943 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:54:50.093712 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:54:50.130017 1039943 logs.go:123] Gathering logs for container status ...
	I1208 01:54:50.130054 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:50.161671 1039943 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:50.161708 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:50.226635 1039943 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:50.226672 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:50.244811 1039943 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:50.244841 1039943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:50.311616 1039943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:50.303390    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.304170    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.305767    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.306103    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:50.307608    4951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 01:54:50.311639 1039943 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:54:50.311681 1039943 out.go:285] * 
	W1208 01:54:50.311744 1039943 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.311758 1039943 out.go:285] * 
	W1208 01:54:50.313886 1039943 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:54:50.318878 1039943 out.go:203] 
	W1208 01:54:50.321774 1039943 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000263477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:54:50.321820 1039943 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:54:50.321849 1039943 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:54:50.324970 1039943 out.go:203] 
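	(Editorial note, not part of the captured run: the failure above is the kubelet never reporting healthy on http://127.0.0.1:10248/healthz within 4m0s, so kubeadm's wait-control-plane phase times out and no control-plane containers are ever created, which is why every crictl listing below finds nothing. A hedged sketch of the follow-up the log itself recommends, using only the commands named in the output; exact flags for a given environment may differ:

	  # inspect the kubelet unit and its recent journal on the node
	  systemctl status kubelet
	  journalctl -xeu kubelet

	  # the preflight warning also notes the unit is not enabled
	  systemctl enable kubelet.service

	  # retry with the cgroup-driver hint from the suggestion line
	  minikube start --extra-config=kubelet.cgroup-driver=systemd)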
	W1208 01:54:51.518789 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:54.018519 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:56.518491 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:54:58.519112 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:01.018584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:03.018643 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:05.518505 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:08.018536 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:10.019472 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:12.518502 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:14.518620 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:17.019504 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:19.518721 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:22.018570 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:24.518558 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:27.018585 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:29.018662 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:31.518652 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:34.018502 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:36.018610 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:38.518693 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:40.518763 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:43.018606 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:45.018797 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:47.019374 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:49.518685 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:51.519142 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:54.018934 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:56.518500 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:55:58.518701 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:00.518894 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:02.519392 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:05.018526 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:07.018584 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:09.018734 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:11.518579 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:13.519381 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:16.019530 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:18.518601 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:20.518909 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:23.019357 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:25.519360 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:28.019395 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:30.518961 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:32.519312 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:34.519397 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
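The warnings above are minikube's node_ready poller retrying while the API server at 192.168.76.2:8443 refuses connections. As a hedged manual equivalent (assuming a working kubeconfig for the no-preload-389831 profile, and with an arbitrarily chosen timeout), the same wait can be reproduced with standard kubectl commands:

	# block until the node reports Ready, or give up after 5 minutes
	kubectl wait --for=condition=Ready node/no-preload-389831 --timeout=300s
	# one-shot view of the same condition the poller is checking
	kubectl get node no-preload-389831 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'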
	
	
	==> CRI-O <==
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995693002Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995845865Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.995899314Z" level=info msg="Create NRI interface"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996004997Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996019094Z" level=info msg="runtime interface created"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996030893Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996036965Z" level=info msg="runtime interface starting up..."
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996043094Z" level=info msg="starting plugins..."
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996057051Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:46:40 newest-cni-448023 crio[835]: time="2025-12-08T01:46:40.996114184Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:46:41 newest-cni-448023 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.917598608Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5e98e09d-a44c-41b8-bd17-ee1e89caeca7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.918797816Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4ae09c2b-38fe-49bc-adec-8679491342d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.919392727Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4295c61d-2edc-437f-b3be-0511120d5e2a name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.919923876Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b4468830-6f64-4d59-9957-ebdf2a248a38 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.920449551Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e0594767-1f3d-4735-bb7d-1040225db3f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.920903587Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3d7fe893-2bc7-4d91-87b7-a65a8217a281 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:46:42 newest-cni-448023 crio[835]: time="2025-12-08T01:46:42.921358533Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=d32647af-17c0-43e7-9e16-c20f911fb4a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.52127883Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d83b1daa-f10f-4f5b-aa76-c9dc4c311d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.521944782Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f6c0a8ec-b34d-45a9-8855-7eea024dac34 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.524798818Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=bf9fcf67-7dde-4bcb-a56c-178e0a20dc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.525251451Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb63c46d-fa6e-43ca-9c60-d985a6518070 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.525684136Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=63014fa8-e5a3-4c50-b262-6167048df68d name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.526643605Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cd7b0552-864c-47d0-badf-c6301cd5e261 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 01:50:47 newest-cni-448023 crio[835]: time="2025-12-08T01:50:47.527333688Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=facf0d08-f50b-421a-a5d2-9dcfee9bdbaa name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:38.848143    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:38.848795    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:38.850391    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:38.851222    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:38.852824    6121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
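Nothing is answering on localhost:8443 inside the newest-cni-448023 node, so every kubectl call above fails before reaching the API. A hedged way to tell whether the kube-apiserver container simply never came up (rather than a networking problem) is to query the runtime and the health endpoint from inside the node; these are plain crictl and curl invocations, with minikube ssh assumed as the entry point:

	# is a kube-apiserver container present at all (running or exited)?
	minikube ssh -p newest-cni-448023 -- sudo crictl ps -a --name kube-apiserver
	# expect "ok" from a healthy apiserver; connection refused matches the errors above
	minikube ssh -p newest-cni-448023 -- curl -sk https://localhost:8443/healthz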
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:56:38 up  6:38,  0 user,  load average: 0.07, 0.55, 1.26
	Linux newest-cni-448023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:56:36 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:56:37 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 789.
	Dec 08 01:56:37 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:37 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:37 newest-cni-448023 kubelet[6010]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:37 newest-cni-448023 kubelet[6010]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:37 newest-cni-448023 kubelet[6010]: E1208 01:56:37.316130    6010 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:37 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:37 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 790.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6029]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6029]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6029]: E1208 01:56:38.097744    6029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 791.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6125]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6125]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 01:56:38 newest-cni-448023 kubelet[6125]: E1208 01:56:38.839466    6125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:56:38 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
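The restart loop above (counter at 791 by this point) is the kubelet's own configuration validation rejecting the host rather than a crash: v1.35.0-beta.0 refuses to run on a cgroup v1 hierarchy. A quick, hedged check of which hierarchy the host and the Docker driver actually expose:

	# cgroup2fs means the unified cgroup v2 hierarchy; tmpfs means legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# what the Docker engine backing the driver reports
	docker info --format '{{.CgroupVersion}}'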
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 6 (336.645304ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:56:39.383222 1054722 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-448023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.38s)
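The exit status 6 here comes from the missing kubeconfig entry for newest-cni-448023 rather than from the addon itself, and the stdout warning already names the fix. A hedged manual check and repair, using the kubeconfig path printed in the stderr above:

	# confirm the profile's cluster entry is absent from the kubeconfig
	kubectl config get-clusters --kubeconfig /home/jenkins/minikube-integration/22054-789938/kubeconfig
	# rewrite the context to the profile's current endpoint, as the warning suggests
	minikube update-context -p newest-cni-448023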

TestStartStop/group/newest-cni/serial/SecondStart (375.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m10.23584615s)

-- stdout --
	* [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1208 01:56:40.995814 1055021 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:56:40.995993 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996024 1055021 out.go:374] Setting ErrFile to fd 2...
	I1208 01:56:40.996044 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996297 1055021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:56:40.996698 1055021 out.go:368] Setting JSON to false
	I1208 01:56:40.997651 1055021 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23933,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:56:40.997760 1055021 start.go:143] virtualization:  
	I1208 01:56:41.000930 1055021 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:56:41.005767 1055021 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:56:41.005958 1055021 notify.go:221] Checking for updates...
	I1208 01:56:41.009547 1055021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:56:41.012698 1055021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:41.016029 1055021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:56:41.019114 1055021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:56:41.022081 1055021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:56:41.025425 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:41.026092 1055021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:56:41.062956 1055021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:56:41.063137 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.133740 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.124579493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.133841 1055021 docker.go:319] overlay module found
	I1208 01:56:41.136922 1055021 out.go:179] * Using the docker driver based on existing profile
	I1208 01:56:41.139812 1055021 start.go:309] selected driver: docker
	I1208 01:56:41.139836 1055021 start.go:927] validating driver "docker" against &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.139955 1055021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:56:41.140671 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.193763 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.183682659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.194162 1055021 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:56:41.194196 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:41.194260 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:41.194313 1055021 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.197698 1055021 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:56:41.200489 1055021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:56:41.203470 1055021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:56:41.206341 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:41.206393 1055021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:56:41.206406 1055021 cache.go:65] Caching tarball of preloaded images
	I1208 01:56:41.206414 1055021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:56:41.206514 1055021 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:56:41.206524 1055021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:56:41.206659 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.226393 1055021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:56:41.226417 1055021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:56:41.226437 1055021 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:56:41.226470 1055021 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:56:41.226539 1055021 start.go:364] duration metric: took 45.818µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:56:41.226562 1055021 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:56:41.226569 1055021 fix.go:54] fixHost starting: 
	I1208 01:56:41.226872 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.244524 1055021 fix.go:112] recreateIfNeeded on newest-cni-448023: state=Stopped err=<nil>
	W1208 01:56:41.244564 1055021 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:56:41.247746 1055021 out.go:252] * Restarting existing docker container for "newest-cni-448023" ...
	I1208 01:56:41.247847 1055021 cli_runner.go:164] Run: docker start newest-cni-448023
	I1208 01:56:41.505835 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.523362 1055021 kic.go:430] container "newest-cni-448023" state is running.
	I1208 01:56:41.523773 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:41.545536 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.545777 1055021 machine.go:94] provisionDockerMachine start ...
	I1208 01:56:41.545848 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:41.570998 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:41.571328 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:41.571336 1055021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:56:41.572041 1055021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:56:44.722629 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.722658 1055021 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:56:44.722733 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.743562 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.743889 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.743906 1055021 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:56:44.912657 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.912755 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.930550 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.930902 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.930926 1055021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:56:45.125086 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:56:45.125166 1055021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:56:45.125215 1055021 ubuntu.go:190] setting up certificates
	I1208 01:56:45.125242 1055021 provision.go:84] configureAuth start
	I1208 01:56:45.125340 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:45.146934 1055021 provision.go:143] copyHostCerts
	I1208 01:56:45.147071 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:56:45.147086 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:56:45.147185 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:56:45.147315 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:56:45.147333 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:56:45.147379 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:56:45.147450 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:56:45.147463 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:56:45.147494 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:56:45.147561 1055021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:56:45.319641 1055021 provision.go:177] copyRemoteCerts
	I1208 01:56:45.319718 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:56:45.319771 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.338151 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.446957 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:56:45.464534 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:56:45.481634 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:56:45.499110 1055021 provision.go:87] duration metric: took 373.83191ms to configureAuth
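configureAuth above regenerates the machine server certificate with the listed SANs and copies it to /etc/docker on the node. A hedged spot-check that the generated certificate really carries those SANs (the host-side path is the one used in the scp step above):

	# list the Subject Alternative Names baked into the freshly generated server cert
	openssl x509 -in /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'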
	I1208 01:56:45.499137 1055021 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:56:45.499354 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:45.499462 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.519312 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:45.520323 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:45.520348 1055021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:56:45.838649 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:56:45.838675 1055021 machine.go:97] duration metric: took 4.292880237s to provisionDockerMachine
	I1208 01:56:45.838688 1055021 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:56:45.838701 1055021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:56:45.838764 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:56:45.838808 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.856107 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.962864 1055021 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:56:45.966280 1055021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:56:45.966310 1055021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:56:45.966321 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:56:45.966376 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:56:45.966455 1055021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:56:45.966565 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:56:45.973812 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:45.990960 1055021 start.go:296] duration metric: took 152.256258ms for postStartSetup
	I1208 01:56:45.991062 1055021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:56:45.991102 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.010295 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.111994 1055021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:56:46.116921 1055021 fix.go:56] duration metric: took 4.890342951s for fixHost
	I1208 01:56:46.116949 1055021 start.go:83] releasing machines lock for "newest-cni-448023", held for 4.89039814s
	I1208 01:56:46.117023 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:46.133998 1055021 ssh_runner.go:195] Run: cat /version.json
	I1208 01:56:46.134053 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.134086 1055021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:56:46.134143 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.155007 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.157578 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.259943 1055021 ssh_runner.go:195] Run: systemctl --version
	I1208 01:56:46.363782 1055021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:56:46.401418 1055021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:56:46.405895 1055021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:56:46.406027 1055021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:56:46.414120 1055021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:56:46.414145 1055021 start.go:496] detecting cgroup driver to use...
	I1208 01:56:46.414178 1055021 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:56:46.414240 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:56:46.430116 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:56:46.443306 1055021 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:56:46.443370 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:56:46.459228 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:56:46.472250 1055021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:56:46.583643 1055021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:56:46.702836 1055021 docker.go:234] disabling docker service ...
	I1208 01:56:46.702974 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:56:46.718081 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:56:46.731165 1055021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:56:46.841278 1055021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:56:46.959396 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:56:46.972986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:56:46.988672 1055021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:56:46.988773 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:46.998541 1055021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:56:46.998635 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.012333 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.022719 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.033036 1055021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:56:47.042410 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.053356 1055021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.066055 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.076106 1055021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:56:47.083610 1055021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:56:47.090937 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.204760 1055021 ssh_runner.go:195] Run: sudo systemctl restart crio
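The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A hedged spot-check of the rewritten drop-in; the expected values in the comments are reconstructed from the sed expressions, not copied from the real file:

	# verify the settings the sed edits were meant to leave behind
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected (sketch): pause_image = "registry.k8s.io/pause:3.10.1"
	#                    cgroup_manager = "cgroupfs"
	#                    conmon_cgroup = "pod"
	#                    "net.ipv4.ip_unprivileged_port_start=0",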
	I1208 01:56:47.377268 1055021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:56:47.377383 1055021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:56:47.381048 1055021 start.go:564] Will wait 60s for crictl version
	I1208 01:56:47.381161 1055021 ssh_runner.go:195] Run: which crictl
	I1208 01:56:47.384529 1055021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:56:47.407415 1055021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:56:47.407590 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.438310 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.480028 1055021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:56:47.482931 1055021 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:56:47.498300 1055021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:56:47.502114 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.515024 1055021 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:56:47.517850 1055021 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:56:47.518007 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:47.518083 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.554783 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.554810 1055021 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:56:47.554891 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.580370 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.580396 1055021 cache_images.go:86] Images are preloaded, skipping loading
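Both crictl calls above return the image list as JSON; the preload check simply compares it against the expected image set for v1.35.0-beta.0. A quick manual equivalent, assuming jq is available inside the node (jq is not part of this run):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    sudo crictl images --output json | jq '.images | length'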
	I1208 01:56:47.580404 1055021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:56:47.580497 1055021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:56:47.580581 1055021 ssh_runner.go:195] Run: crio config
	I1208 01:56:47.630652 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:47.630677 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:47.630697 1055021 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:56:47.630720 1055021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:56:47.630943 1055021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:56:47.631027 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:56:47.638867 1055021 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:56:47.638960 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:56:47.646535 1055021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:56:47.659466 1055021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:56:47.672488 1055021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
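The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (2219 bytes here) and, a few steps later, diffed against the previous render to decide whether the control plane must be reconfigured. The underlying check is roughly:

    # a non-empty unified diff (non-zero exit) means reconfiguration is needed
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "config unchanged" || echo "config changed"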
	I1208 01:56:47.685612 1055021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:56:47.689373 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.699289 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.852921 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:47.877101 1055021 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:56:47.877130 1055021 certs.go:195] generating shared ca certs ...
	I1208 01:56:47.877147 1055021 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:47.877305 1055021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:56:47.877358 1055021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:56:47.877370 1055021 certs.go:257] generating profile certs ...
	I1208 01:56:47.877482 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:56:47.877551 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:56:47.877603 1055021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:56:47.877731 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:56:47.877771 1055021 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:56:47.877792 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:56:47.877831 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:56:47.877859 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:56:47.877890 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:56:47.877943 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:47.879217 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:56:47.903514 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:56:47.922072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:56:47.939555 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:56:47.956891 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:56:47.976072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:56:47.994485 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:56:48.016256 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:56:48.036003 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:56:48.058425 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:56:48.078107 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:56:48.096426 1055021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:56:48.110183 1055021 ssh_runner.go:195] Run: openssl version
	I1208 01:56:48.117292 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.125194 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:56:48.133030 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136789 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136880 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.178238 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:56:48.186394 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.194429 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:56:48.203481 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207582 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207655 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.249053 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:56:48.257115 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.265010 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:56:48.272913 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276751 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276818 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.318199 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
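The openssl/ln sequence above registers each CA bundle under its OpenSSL subject hash in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is where the symlink names come from. Condensed, the same steps look like (path is one of the certificates from this run):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(sudo openssl x509 -hash -noout -in "$CERT")   # b5213941 for minikubeCA above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo registered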
	I1208 01:56:48.326277 1055021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:56:48.330322 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:56:48.371576 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:56:48.412414 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:56:48.454546 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:56:48.499800 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:56:48.544265 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
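Each -checkend 86400 call verifies that the certificate stays valid for at least another 24 hours (86400 seconds). The same check can be run by hand across the whole certs tree, for example:

    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "WARNING: $c expires within 24h"
    done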
	I1208 01:56:48.590374 1055021 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:48.590473 1055021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:56:48.590547 1055021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:56:48.619202 1055021 cri.go:89] found id: ""
	I1208 01:56:48.619330 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:56:48.627096 1055021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:56:48.627120 1055021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:56:48.627172 1055021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:56:48.634458 1055021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:56:48.635058 1055021 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.635319 1055021 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-448023" cluster setting kubeconfig missing "newest-cni-448023" context setting]
	I1208 01:56:48.635800 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.637157 1055021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:56:48.644838 1055021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:56:48.644913 1055021 kubeadm.go:602] duration metric: took 17.785882ms to restartPrimaryControlPlane
	I1208 01:56:48.644930 1055021 kubeadm.go:403] duration metric: took 54.567759ms to StartCluster
	I1208 01:56:48.644947 1055021 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.645007 1055021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.645870 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.646084 1055021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:56:48.646389 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:48.646439 1055021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:56:48.646504 1055021 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-448023"
	I1208 01:56:48.646529 1055021 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-448023"
	I1208 01:56:48.646555 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647285 1055021 addons.go:70] Setting dashboard=true in profile "newest-cni-448023"
	I1208 01:56:48.647305 1055021 addons.go:239] Setting addon dashboard=true in "newest-cni-448023"
	W1208 01:56:48.647311 1055021 addons.go:248] addon dashboard should already be in state true
	I1208 01:56:48.647331 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.647957 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.648448 1055021 addons.go:70] Setting default-storageclass=true in profile "newest-cni-448023"
	I1208 01:56:48.648476 1055021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-448023"
	I1208 01:56:48.648734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.651945 1055021 out.go:179] * Verifying Kubernetes components...
	I1208 01:56:48.654867 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:48.684864 1055021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:56:48.691009 1055021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:56:48.694226 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:56:48.694251 1055021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:56:48.694323 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.695436 1055021 addons.go:239] Setting addon default-storageclass=true in "newest-cni-448023"
	I1208 01:56:48.695482 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.695884 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.701699 1055021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:56:48.704558 1055021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.704591 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:56:48.704655 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.736846 1055021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.736869 1055021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:56:48.736936 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.742543 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.766983 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.785430 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
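The repeated 'docker container inspect -f ... HostPort' calls resolve the host port that Docker mapped to the node's 22/tcp, which is what the three ssh clients above connect to (33817 in this run). Done by hand, that is roughly (key path and user taken from the sshutil lines above, purely illustrative):

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-448023)
    ssh -i /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa \
      -p "$PORT" docker@127.0.0.1 'echo connected'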
	I1208 01:56:48.885046 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:48.955470 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:56:48.955498 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:56:48.963459 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.965887 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.978338 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:56:48.978366 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:56:49.016188 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:56:49.016210 1055021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:56:49.061303 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:56:49.061328 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:56:49.074921 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:56:49.074987 1055021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:56:49.087412 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:56:49.087487 1055021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:56:49.099641 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:56:49.099667 1055021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:56:49.112487 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:56:49.112550 1055021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:56:49.125264 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.125288 1055021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:56:49.138335 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.508759 1055021 api_server.go:52] waiting for apiserver process to appear ...
	W1208 01:56:49.508918 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509385 1055021 retry.go:31] will retry after 199.05184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509006 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509406 1055021 retry.go:31] will retry after 322.784094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509263 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509418 1055021 retry.go:31] will retry after 353.691521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509538 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
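The "connection refused" failures above are expected at this stage: kubectl's validation tries to fetch the OpenAPI document from the apiserver on localhost:8443, which is not running yet, so the addon applies are retried with backoff while the apiserver process is polled via pgrep. A manual equivalent of that wait-then-apply loop (illustrative only) would be:

    # wait for the profile's kube-apiserver process, then re-apply the manifest
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 1; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml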
	I1208 01:56:49.709327 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:49.771304 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.771383 1055021 retry.go:31] will retry after 463.845922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.832454 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:49.863948 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:49.893225 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.893260 1055021 retry.go:31] will retry after 412.627767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.933504 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.933538 1055021 retry.go:31] will retry after 461.252989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.009945 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.235907 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:50.306466 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:50.322038 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.322071 1055021 retry.go:31] will retry after 523.830022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:50.380008 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.380051 1055021 retry.go:31] will retry after 753.154513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.395255 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:50.456642 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.456676 1055021 retry.go:31] will retry after 803.433098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.509737 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.846838 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:50.908365 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.908408 1055021 retry.go:31] will retry after 671.521026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.009996 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.134042 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.192423 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.192455 1055021 retry.go:31] will retry after 689.227768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.260665 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:51.319134 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.319182 1055021 retry.go:31] will retry after 541.526321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.509442 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.580384 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:51.640452 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.640485 1055021 retry.go:31] will retry after 844.977075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.861863 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:51.882351 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.944280 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.944321 1055021 retry.go:31] will retry after 1.000499188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.967122 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.967155 1055021 retry.go:31] will retry after 859.890122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.010305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:52.486447 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:52.510056 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:56:52.585753 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.585816 1055021 retry.go:31] will retry after 1.004705222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.828167 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:52.886091 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.886122 1055021 retry.go:31] will retry after 2.82316744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.945292 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:53.006627 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.006710 1055021 retry.go:31] will retry after 2.04955933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.009824 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.510073 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.591501 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:53.650678 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.650712 1055021 retry.go:31] will retry after 3.502569911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:54.010159 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:54.509667 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.009590 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.057336 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:55.132269 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.132307 1055021 retry.go:31] will retry after 2.513983979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.509439 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.710171 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:55.769058 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.769091 1055021 retry.go:31] will retry after 2.669645777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:56.009694 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.509523 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.010140 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.153585 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:57.218181 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.218214 1055021 retry.go:31] will retry after 3.909169329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.647096 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:57.710136 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.710169 1055021 retry.go:31] will retry after 4.894098122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.009665 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:58.439443 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:58.505497 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.505529 1055021 retry.go:31] will retry after 6.007342944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.009469 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.510388 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.015300 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.509494 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.010257 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.128215 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:01.190419 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.190453 1055021 retry.go:31] will retry after 9.504933562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.509623 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.009676 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.509462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.605116 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:02.675800 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:02.675835 1055021 retry.go:31] will retry after 6.984717516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:03.009407 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:03.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.015233 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.509531 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.514060 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:04.574188 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:04.574220 1055021 retry.go:31] will retry after 6.522846226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:05.012398 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.509759 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.010229 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.509419 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.009462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.510275 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.010363 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.010036 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.509454 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.661163 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:09.722054 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:09.722085 1055021 retry.go:31] will retry after 5.465119302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.010374 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.510222 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.696134 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:10.771084 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.771123 1055021 retry.go:31] will retry after 11.695285792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.009829 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.098157 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:11.159270 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.159302 1055021 retry.go:31] will retry after 8.417822009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.509651 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.010126 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.009464 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.510317 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.009529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.510393 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.009573 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.188355 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:15.251108 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.251147 1055021 retry.go:31] will retry after 12.201311078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.509570 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.009635 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.009802 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.510253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.509509 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.009459 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.509684 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.577986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:19.638356 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:19.638389 1055021 retry.go:31] will retry after 8.001395588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:20.012301 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.509725 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.010367 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.509456 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.009599 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.467388 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:57:22.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:22.532031 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:22.532062 1055021 retry.go:31] will retry after 11.135828112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:23.009468 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.509432 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.010095 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.510255 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.012400 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.010403 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.452716 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:27.510223 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:27.519149 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.519184 1055021 retry.go:31] will retry after 13.452567778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.640862 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:27.703487 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.703522 1055021 retry.go:31] will retry after 26.167048463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:28.009930 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:28.509594 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.009708 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.510396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.009745 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.010280 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.010087 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.509477 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.010351 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.509804 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.668898 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:33.729185 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:33.729219 1055021 retry.go:31] will retry after 25.894597219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:34.009473 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:34.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.010355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.010451 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.509505 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.009541 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.509700 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.014196 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.509592 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.010217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.510250 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.015373 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.510349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.972256 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:41.009839 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:41.066333 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.066366 1055021 retry.go:31] will retry after 34.953666856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.509748 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.009596 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.509438 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.009956 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.510378 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.009680 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.012784 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.510247 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.010335 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.509529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.009480 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.509657 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.009556 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.509689 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.009367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.009459 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.046711 1055021 cri.go:89] found id: ""
	I1208 01:57:49.046741 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.046749 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.046756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.046829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.086414 1055021 cri.go:89] found id: ""
	I1208 01:57:49.086435 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.086443 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.086449 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.086517 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:49.111234 1055021 cri.go:89] found id: ""
	I1208 01:57:49.111256 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.111264 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:49.111270 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:49.111328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:49.135868 1055021 cri.go:89] found id: ""
	I1208 01:57:49.135890 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.135899 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:49.135905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:49.135966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:49.161459 1055021 cri.go:89] found id: ""
	I1208 01:57:49.161482 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.161490 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:49.161496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:49.161557 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:49.186397 1055021 cri.go:89] found id: ""
	I1208 01:57:49.186421 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.186430 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:49.186436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:49.186542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:49.213171 1055021 cri.go:89] found id: ""
	I1208 01:57:49.213192 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.213201 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:49.213207 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:49.213265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:49.239381 1055021 cri.go:89] found id: ""
	I1208 01:57:49.239451 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.239484 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:49.239500 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:49.239512 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:49.311423 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:49.311459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:49.331846 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:49.331876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:49.396868 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:49.396933 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:49.396954 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:49.425376 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:49.425412 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
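
	The sweep starting at 01:57:49 is minikube's diagnostic pass: it asks CRI-O for each expected control-plane container by name, finds none, and then falls back to unit logs and a node description. A short sketch of running the same checks by hand over SSH, using only commands this log already issues:

	  # Query CRI-O for each control-plane component, exactly as the cri.go calls above do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    echo "$name: ${ids:-<no containers found>}"
	  done
	  # When nothing is running, the unit logs are the next source of truth
	  sudo journalctl -u kubelet -n 400 --no-pager | tail -n 50
	  sudo journalctl -u crio -n 400 --no-pager | tail -n 50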
	I1208 01:57:51.956807 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:51.967366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:51.967435 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:51.995332 1055021 cri.go:89] found id: ""
	I1208 01:57:51.995356 1055021 logs.go:282] 0 containers: []
	W1208 01:57:51.995364 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:51.995371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:51.995429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.032087 1055021 cri.go:89] found id: ""
	I1208 01:57:52.032112 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.032121 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.032128 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.032190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.066375 1055021 cri.go:89] found id: ""
	I1208 01:57:52.066403 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.066412 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.066420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.066490 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:52.098263 1055021 cri.go:89] found id: ""
	I1208 01:57:52.098291 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.098300 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:52.098306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:52.098376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:52.125642 1055021 cri.go:89] found id: ""
	I1208 01:57:52.125672 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.125681 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:52.125688 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:52.125750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:52.155324 1055021 cri.go:89] found id: ""
	I1208 01:57:52.155348 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.155356 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:52.155363 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:52.155424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:52.180558 1055021 cri.go:89] found id: ""
	I1208 01:57:52.180625 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.180647 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:52.180659 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:52.180742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:52.209892 1055021 cri.go:89] found id: ""
	I1208 01:57:52.209921 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.209930 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:52.209940 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:52.209951 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:52.237887 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:52.237925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:52.279083 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:52.279113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:52.360508 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:52.360547 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:52.379387 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:52.379417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:52.443498 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.871074 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:53.931966 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:53.931998 1055021 retry.go:31] will retry after 33.054913046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
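
	The storageclass addon hits the same connection-refused validation error, and retry.go schedules another attempt roughly 33 seconds out. The real backoff lives in minikube's Go retry helper; the loop below is only an illustrative shell analogue of the apply-then-wait pattern these log entries describe, with arbitrary delays:

	  # Illustration of the retry-with-backoff pattern recorded by the retry.go lines (delays are made up)
	  for delay in 10 20 30; do
	    if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	         /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	         -f /etc/kubernetes/addons/storageclass.yaml; then
	      break
	    fi
	    echo "apply failed, retrying in ${delay}s"
	    sleep "$delay"
	  done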
	I1208 01:57:54.943790 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:54.955406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:54.955477 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:54.980272 1055021 cri.go:89] found id: ""
	I1208 01:57:54.980295 1055021 logs.go:282] 0 containers: []
	W1208 01:57:54.980303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:54.980310 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:54.980377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.016873 1055021 cri.go:89] found id: ""
	I1208 01:57:55.016950 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.016973 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.016992 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.017116 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.055884 1055021 cri.go:89] found id: ""
	I1208 01:57:55.055905 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.055914 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.055920 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.055979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.085540 1055021 cri.go:89] found id: ""
	I1208 01:57:55.085561 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.085569 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.085576 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.085641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.111356 1055021 cri.go:89] found id: ""
	I1208 01:57:55.111378 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.111386 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.111393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.111473 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:55.137620 1055021 cri.go:89] found id: ""
	I1208 01:57:55.137643 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.137651 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:55.137657 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:55.137717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:55.162561 1055021 cri.go:89] found id: ""
	I1208 01:57:55.162626 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.162650 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:55.162667 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:55.162751 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:55.188593 1055021 cri.go:89] found id: ""
	I1208 01:57:55.188658 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.188683 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:55.188697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:55.188744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:55.254035 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:55.254057 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:55.254081 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:55.286453 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:55.286528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:55.320738 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:55.320762 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.387748 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:55.387783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
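
	Since crictl reports no kube-apiserver container at all, not even an exited one, the next place to look is whether the kubelet is being handed a static pod manifest for it. The manifest directory below is the kubeadm default and is an assumption here, not something this log states; the journalctl invocation matches the one minikube itself runs:

	  # Static pod manifests the kubelet is expected to sync (kubeadm default path; assumed)
	  sudo ls -l /etc/kubernetes/manifests/
	  # Kubelet messages about the apiserver static pod
	  sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'apiserver|static pod|failed' | tail -n 30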
	I1208 01:57:57.905905 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:57.918662 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:57.918736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:57.946026 1055021 cri.go:89] found id: ""
	I1208 01:57:57.946049 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.946058 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:57.946065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:57.946124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:57.971642 1055021 cri.go:89] found id: ""
	I1208 01:57:57.971669 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.971678 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:57.971685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:57.971744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.007407 1055021 cri.go:89] found id: ""
	I1208 01:57:58.007432 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.007441 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.007447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.007523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.050421 1055021 cri.go:89] found id: ""
	I1208 01:57:58.050442 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.050450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.050457 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.050518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.083694 1055021 cri.go:89] found id: ""
	I1208 01:57:58.083719 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.083728 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.083741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.083800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.110828 1055021 cri.go:89] found id: ""
	I1208 01:57:58.110874 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.110882 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.110899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.110974 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.136277 1055021 cri.go:89] found id: ""
	I1208 01:57:58.136302 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.136310 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.136317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.136378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:58.162168 1055021 cri.go:89] found id: ""
	I1208 01:57:58.162234 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.162258 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:58.162280 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:58.162304 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.191089 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:58.191121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:58.262015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:58.262058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:58.282086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:58.282121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:58.355880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:58.355910 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:58.355926 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:59.624913 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:59.684883 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:59.684920 1055021 retry.go:31] will retry after 39.668120724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
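
	The storage-provisioner apply fails the same way, and it will keep failing for as long as nothing answers on localhost:8443; the addon retries are effectively just waiting on the apiserver. A hedged sketch of making that wait explicit with a readiness poll (the /readyz endpoint is the standard kube-apiserver readiness path, not something taken from this log):

	  # Poll the apiserver readiness endpoint for up to ~2 minutes before applying addons
	  for i in $(seq 1 60); do
	    if curl -sk --max-time 2 https://localhost:8443/readyz >/dev/null; then
	      echo "apiserver ready"
	      break
	    fi
	    sleep 2
	  done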
	I1208 01:58:00.884752 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:00.909814 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:00.909896 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:00.936313 1055021 cri.go:89] found id: ""
	I1208 01:58:00.936344 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.936353 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:00.936360 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:00.936420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:00.966288 1055021 cri.go:89] found id: ""
	I1208 01:58:00.966355 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.966376 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:00.966394 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:00.966483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:00.992494 1055021 cri.go:89] found id: ""
	I1208 01:58:00.992526 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.992536 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:00.992543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:00.992608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.026941 1055021 cri.go:89] found id: ""
	I1208 01:58:01.026969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.026979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.026985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.027057 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.058196 1055021 cri.go:89] found id: ""
	I1208 01:58:01.058224 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.058233 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.058239 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.058301 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.086997 1055021 cri.go:89] found id: ""
	I1208 01:58:01.087025 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.087034 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.087042 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.087124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.113372 1055021 cri.go:89] found id: ""
	I1208 01:58:01.113401 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.113411 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.113417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.113480 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.140687 1055021 cri.go:89] found id: ""
	I1208 01:58:01.140717 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.140726 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.140736 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.140747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:01.211011 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:01.211061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:01.229916 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:01.229948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:01.319423 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:01.319443 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:01.319455 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:01.349176 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:01.349213 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:03.883281 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:03.894087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:03.894159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:03.919271 1055021 cri.go:89] found id: ""
	I1208 01:58:03.919294 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.919302 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:03.919309 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:03.919367 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:03.944356 1055021 cri.go:89] found id: ""
	I1208 01:58:03.944379 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.944387 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:03.944393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:03.944456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:03.969863 1055021 cri.go:89] found id: ""
	I1208 01:58:03.969890 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.969900 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:03.969907 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:03.969981 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:03.995306 1055021 cri.go:89] found id: ""
	I1208 01:58:03.995328 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.995336 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:03.995344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:03.995402 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.037050 1055021 cri.go:89] found id: ""
	I1208 01:58:04.037079 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.037089 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.037096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.037159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.081029 1055021 cri.go:89] found id: ""
	I1208 01:58:04.081057 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.081066 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.081073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.081139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.111984 1055021 cri.go:89] found id: ""
	I1208 01:58:04.112005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.112013 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.112020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.112079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.140750 1055021 cri.go:89] found id: ""
	I1208 01:58:04.140776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.140784 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.140793 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.140805 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.207146 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.207183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:04.225030 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:04.225061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:04.295674 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
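
	The repeated memcache.go "couldn't get current server API group list" lines come from kubectl's discovery client: describe nodes cannot even enumerate API groups while the connection is refused, so every gather pass produces the same stderr. A small sketch for confirming that the node-local kubeconfig really points at localhost:8443 and that the port is simply unbound; both paths appear verbatim earlier in this log:

	  # Where does the node-local kubeconfig point?
	  sudo grep 'server:' /var/lib/minikube/kubeconfig
	  # Is that port bound by anything?
	  sudo ss -ltn 'sport = :8443'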
	I1208 01:58:04.295696 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:04.295708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:04.326962 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.327003 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:06.859119 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:06.871159 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:06.871236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:06.901570 1055021 cri.go:89] found id: ""
	I1208 01:58:06.901594 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.901603 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:06.901618 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:06.901681 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:06.930193 1055021 cri.go:89] found id: ""
	I1208 01:58:06.930220 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.930229 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:06.930235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:06.930298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:06.955159 1055021 cri.go:89] found id: ""
	I1208 01:58:06.955188 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.955197 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:06.955205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:06.955278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:06.980007 1055021 cri.go:89] found id: ""
	I1208 01:58:06.980031 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.980040 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:06.980046 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:06.980103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.017391 1055021 cri.go:89] found id: ""
	I1208 01:58:07.017417 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.017425 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.017432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.017495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.048550 1055021 cri.go:89] found id: ""
	I1208 01:58:07.048577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.048586 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.048596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.048659 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.080691 1055021 cri.go:89] found id: ""
	I1208 01:58:07.080759 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.080783 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.080796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.080874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.105849 1055021 cri.go:89] found id: ""
	I1208 01:58:07.105925 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.105948 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.105971 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:07.106012 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:07.138653 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.138732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.206905 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.206940 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.224653 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.224683 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.303888 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.303912 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:07.303925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
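The block above is one iteration of minikube's wait loop: it probes for a running kube-apiserver process, lists every expected control-plane container with crictl (all come back empty), and then gathers kubelet, dmesg, describe-nodes, and CRI-O logs. The describe-nodes probe fails with "connection refused" on localhost:8443 because no apiserver container was ever created, so the same cycle repeats every few seconds for the rest of this section. A rough sketch of the probe, reusing the exact commands from the log (the loop structure and the sleep interval are assumptions inferred from the timestamps, not minikube's actual code):

    # Sketch of the probe cycle shown above, assuming a simple poll loop
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sudo crictl ps -a --quiet --name=kube-apiserver              # empty output: no apiserver container exists
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig || true          # "connection refused" while nothing listens on 8443
      sleep 3
    done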
	I1208 01:58:09.834549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:09.845152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:09.845227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:09.870225 1055021 cri.go:89] found id: ""
	I1208 01:58:09.870251 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.870259 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:09.870268 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:09.870330 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:09.896168 1055021 cri.go:89] found id: ""
	I1208 01:58:09.896191 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.896200 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:09.896206 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:09.896269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:09.922117 1055021 cri.go:89] found id: ""
	I1208 01:58:09.922140 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.922149 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:09.922155 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:09.922215 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:09.947105 1055021 cri.go:89] found id: ""
	I1208 01:58:09.947129 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.947137 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:09.947143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:09.947236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:09.972509 1055021 cri.go:89] found id: ""
	I1208 01:58:09.972535 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.972544 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:09.972551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:09.972609 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.009065 1055021 cri.go:89] found id: ""
	I1208 01:58:10.009097 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.009107 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.009115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.009196 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.052170 1055021 cri.go:89] found id: ""
	I1208 01:58:10.052197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.052206 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.052212 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.052278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.078447 1055021 cri.go:89] found id: ""
	I1208 01:58:10.078472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.078480 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.078489 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:10.078500 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:10.109259 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.109300 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:10.138226 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.138251 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.204388 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.204424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.222357 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.222398 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.305027 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:12.805305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:12.815949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:12.816024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:12.840507 1055021 cri.go:89] found id: ""
	I1208 01:58:12.840531 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.840540 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:12.840546 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:12.840614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:12.865555 1055021 cri.go:89] found id: ""
	I1208 01:58:12.865580 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.865589 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:12.865595 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:12.865653 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:12.890286 1055021 cri.go:89] found id: ""
	I1208 01:58:12.890311 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.890319 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:12.890325 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:12.890383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:12.915193 1055021 cri.go:89] found id: ""
	I1208 01:58:12.915217 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.915226 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:12.915233 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:12.915291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:12.940889 1055021 cri.go:89] found id: ""
	I1208 01:58:12.940915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.940923 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:12.940931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:12.941011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:12.967233 1055021 cri.go:89] found id: ""
	I1208 01:58:12.967259 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.967268 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:12.967275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:12.967337 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:12.990975 1055021 cri.go:89] found id: ""
	I1208 01:58:12.991001 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.991009 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:12.991016 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:12.991088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.025590 1055021 cri.go:89] found id: ""
	I1208 01:58:13.025616 1055021 logs.go:282] 0 containers: []
	W1208 01:58:13.025625 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.025634 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.025646 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.063362 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.063391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.134922 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.134959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.153025 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.153060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.215226 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:13.215246 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:13.215258 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:15.744740 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:15.755312 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:15.755383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:15.780891 1055021 cri.go:89] found id: ""
	I1208 01:58:15.780915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.780923 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:15.780930 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:15.780989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:15.806161 1055021 cri.go:89] found id: ""
	I1208 01:58:15.806185 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.806194 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:15.806200 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:15.806257 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:15.831178 1055021 cri.go:89] found id: ""
	I1208 01:58:15.831197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.831205 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:15.831211 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:15.831269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:15.856130 1055021 cri.go:89] found id: ""
	I1208 01:58:15.856155 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.856164 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:15.856171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:15.856232 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:15.885064 1055021 cri.go:89] found id: ""
	I1208 01:58:15.885136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.885159 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:15.885177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:15.885270 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:15.912595 1055021 cri.go:89] found id: ""
	I1208 01:58:15.912623 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.912631 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:15.912638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:15.912700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:15.936650 1055021 cri.go:89] found id: ""
	I1208 01:58:15.936677 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.936686 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:15.936692 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:15.936752 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:15.962329 1055021 cri.go:89] found id: ""
	I1208 01:58:15.962350 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.962358 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:15.962367 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:15.962378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 01:58:16.020986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:16.067660 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:16.067744 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:16.067772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1208 01:58:16.112099 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.112132 1055021 retry.go:31] will retry after 29.72360839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.126560 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.126615 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.157854 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.157883 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.223999 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.224035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:18.742355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:18.752998 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:18.753077 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:18.778077 1055021 cri.go:89] found id: ""
	I1208 01:58:18.778099 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.778107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:18.778114 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:18.778171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:18.802643 1055021 cri.go:89] found id: ""
	I1208 01:58:18.802665 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.802673 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:18.802679 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:18.802736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:18.827413 1055021 cri.go:89] found id: ""
	I1208 01:58:18.827441 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.827450 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:18.827456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:18.827514 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:18.852593 1055021 cri.go:89] found id: ""
	I1208 01:58:18.852618 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.852627 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:18.852634 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:18.852694 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:18.877850 1055021 cri.go:89] found id: ""
	I1208 01:58:18.877876 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.877884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:18.877891 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:18.877949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:18.906907 1055021 cri.go:89] found id: ""
	I1208 01:58:18.906930 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.906938 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:18.906945 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:18.907007 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:18.932699 1055021 cri.go:89] found id: ""
	I1208 01:58:18.932723 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.932733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:18.932739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:18.932802 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:18.958426 1055021 cri.go:89] found id: ""
	I1208 01:58:18.958448 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.958456 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:18.958465 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:18.958476 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.023824 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.023904 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.043811 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.043946 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.116236 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:19.116259 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:19.116273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:19.145950 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.145986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:21.678015 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:21.689017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:21.689107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:21.714453 1055021 cri.go:89] found id: ""
	I1208 01:58:21.714513 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.714522 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:21.714529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:21.714590 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:21.738662 1055021 cri.go:89] found id: ""
	I1208 01:58:21.738688 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.738697 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:21.738703 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:21.738765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:21.763648 1055021 cri.go:89] found id: ""
	I1208 01:58:21.763684 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.763693 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:21.763700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:21.763768 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:21.789120 1055021 cri.go:89] found id: ""
	I1208 01:58:21.789142 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.789150 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:21.789156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:21.789212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:21.814445 1055021 cri.go:89] found id: ""
	I1208 01:58:21.814466 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.814474 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:21.814480 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:21.814538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:21.843027 1055021 cri.go:89] found id: ""
	I1208 01:58:21.843061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.843070 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:21.843078 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:21.843139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:21.872604 1055021 cri.go:89] found id: ""
	I1208 01:58:21.872632 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.872640 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:21.872647 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:21.872725 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:21.898190 1055021 cri.go:89] found id: ""
	I1208 01:58:21.898225 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.898233 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:21.898258 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:21.898274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:21.963735 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:21.963774 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:21.981549 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:21.981580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.065337 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:22.065359 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:22.065373 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:22.096383 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.096419 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:24.626630 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:24.637406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:24.637484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:24.662982 1055021 cri.go:89] found id: ""
	I1208 01:58:24.663005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.663014 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:24.663020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:24.663088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:24.687863 1055021 cri.go:89] found id: ""
	I1208 01:58:24.687887 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.687897 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:24.687904 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:24.687965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:24.713087 1055021 cri.go:89] found id: ""
	I1208 01:58:24.713110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.713119 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:24.713125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:24.713185 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:24.738346 1055021 cri.go:89] found id: ""
	I1208 01:58:24.738369 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.738378 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:24.738385 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:24.738451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:24.764281 1055021 cri.go:89] found id: ""
	I1208 01:58:24.764309 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.764317 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:24.764323 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:24.764382 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:24.788244 1055021 cri.go:89] found id: ""
	I1208 01:58:24.788267 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.788276 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:24.788282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:24.788358 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:24.812521 1055021 cri.go:89] found id: ""
	I1208 01:58:24.812544 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.812553 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:24.812559 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:24.812620 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:24.837747 1055021 cri.go:89] found id: ""
	I1208 01:58:24.837772 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.837781 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:24.837790 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:24.837804 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:24.903152 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:24.903189 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:24.920792 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:24.920824 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:24.987709 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:24.987780 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:24.987806 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:25.019693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.019773 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:26.987306 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:58:27.057603 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:27.057721 1055021 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:27.560847 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:27.570936 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:27.571004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:27.595473 1055021 cri.go:89] found id: ""
	I1208 01:58:27.595497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.595505 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:27.595512 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:27.595577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:27.620674 1055021 cri.go:89] found id: ""
	I1208 01:58:27.620696 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.620704 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:27.620710 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:27.620766 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:27.646168 1055021 cri.go:89] found id: ""
	I1208 01:58:27.646192 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.646202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:27.646208 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:27.646283 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:27.671472 1055021 cri.go:89] found id: ""
	I1208 01:58:27.671549 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.671564 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:27.671572 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:27.671632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:27.699385 1055021 cri.go:89] found id: ""
	I1208 01:58:27.699409 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.699417 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:27.699423 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:27.699492 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:27.726912 1055021 cri.go:89] found id: ""
	I1208 01:58:27.726937 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.726946 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:27.726953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:27.727011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:27.752037 1055021 cri.go:89] found id: ""
	I1208 01:58:27.752061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.752070 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:27.752076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:27.752139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:27.777018 1055021 cri.go:89] found id: ""
	I1208 01:58:27.777081 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.777097 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:27.777106 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:27.777119 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:27.845091 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:27.845115 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:27.845129 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:27.873750 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:27.873794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:27.906540 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:27.906569 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:27.986314 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:27.986360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.504860 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:30.520332 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:30.520426 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:30.558545 1055021 cri.go:89] found id: ""
	I1208 01:58:30.558574 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.558589 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:30.558596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:30.558670 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:30.587958 1055021 cri.go:89] found id: ""
	I1208 01:58:30.587979 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.587988 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:30.587994 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:30.588055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:30.613947 1055021 cri.go:89] found id: ""
	I1208 01:58:30.613969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.613977 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:30.613983 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:30.614048 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:30.639872 1055021 cri.go:89] found id: ""
	I1208 01:58:30.639899 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.639908 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:30.639916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:30.639975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:30.664766 1055021 cri.go:89] found id: ""
	I1208 01:58:30.664789 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.664797 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:30.664804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:30.664862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:30.694045 1055021 cri.go:89] found id: ""
	I1208 01:58:30.694110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.694130 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:30.694149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:30.694238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:30.719821 1055021 cri.go:89] found id: ""
	I1208 01:58:30.719843 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.719851 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:30.719857 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:30.719915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:30.745151 1055021 cri.go:89] found id: ""
	I1208 01:58:30.745176 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.745185 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:30.745194 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:30.745206 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:30.808884 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:30.808918 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.826624 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:30.826650 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:30.895279 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:30.895304 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:30.895317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:30.927429 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:30.927478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:33.458304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:33.468970 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:33.469040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:33.493566 1055021 cri.go:89] found id: ""
	I1208 01:58:33.493592 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.493601 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:33.493608 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:33.493669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:33.526608 1055021 cri.go:89] found id: ""
	I1208 01:58:33.526630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.526638 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:33.526644 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:33.526705 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:33.560265 1055021 cri.go:89] found id: ""
	I1208 01:58:33.560287 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.560295 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:33.560301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:33.560376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:33.588803 1055021 cri.go:89] found id: ""
	I1208 01:58:33.588830 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.588839 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:33.588846 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:33.588908 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:33.614585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.614610 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.614619 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:33.614625 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:33.614684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:33.638894 1055021 cri.go:89] found id: ""
	I1208 01:58:33.638917 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.638926 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:33.638933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:33.638991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:33.664714 1055021 cri.go:89] found id: ""
	I1208 01:58:33.664736 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.664744 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:33.664752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:33.664814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:33.689585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.689611 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.689620 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:33.689629 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:33.689641 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:33.753906 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:33.753942 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:33.771754 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:33.771783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:33.841023 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:33.841047 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:33.841060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:33.868853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:33.868891 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.397728 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.410372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.410443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:36.441015 1055021 cri.go:89] found id: ""
	I1208 01:58:36.441041 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.441049 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:36.441055 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:36.441117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:36.466353 1055021 cri.go:89] found id: ""
	I1208 01:58:36.466386 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.466395 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:36.466401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:36.466463 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:36.491643 1055021 cri.go:89] found id: ""
	I1208 01:58:36.491670 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.491679 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:36.491685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:36.491743 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:36.531444 1055021 cri.go:89] found id: ""
	I1208 01:58:36.531472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.531480 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:36.531487 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:36.531551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:36.561863 1055021 cri.go:89] found id: ""
	I1208 01:58:36.561891 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.561900 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:36.561906 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:36.561965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:36.598817 1055021 cri.go:89] found id: ""
	I1208 01:58:36.598868 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.598877 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:36.598884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:36.598953 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:36.625352 1055021 cri.go:89] found id: ""
	I1208 01:58:36.625392 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.625402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:36.625408 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:36.625478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:36.649929 1055021 cri.go:89] found id: ""
	I1208 01:58:36.649961 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.649969 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:36.649979 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:36.649991 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:36.717242 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:36.717272 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:36.717284 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:36.745340 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:36.745375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.772396 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:36.772423 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:36.840336 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:36.840375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.353819 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:58:39.359310 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:58:39.415165 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:39.415265 1055021 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:39.415318 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.415380 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.440780 1055021 cri.go:89] found id: ""
	I1208 01:58:39.440802 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.440817 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.440824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.440883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:39.469267 1055021 cri.go:89] found id: ""
	I1208 01:58:39.469293 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.469302 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:39.469308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:39.469369 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:39.497131 1055021 cri.go:89] found id: ""
	I1208 01:58:39.497154 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.497162 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:39.497171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:39.497229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:39.533641 1055021 cri.go:89] found id: ""
	I1208 01:58:39.533666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.533675 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:39.533683 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:39.533741 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:39.569861 1055021 cri.go:89] found id: ""
	I1208 01:58:39.569884 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.569893 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:39.569900 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:39.569959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:39.598670 1055021 cri.go:89] found id: ""
	I1208 01:58:39.598694 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.598702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:39.598709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:39.598770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:39.623360 1055021 cri.go:89] found id: ""
	I1208 01:58:39.623384 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.623392 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:39.623398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:39.623464 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:39.647840 1055021 cri.go:89] found id: ""
	I1208 01:58:39.647864 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.647873 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:39.647881 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:39.647893 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:39.711466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:39.711505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.728921 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:39.728950 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:39.792077 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:39.792097 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:39.792111 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:39.819026 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:39.819064 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.348228 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.359751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.359835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.385781 1055021 cri.go:89] found id: ""
	I1208 01:58:42.385808 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.385818 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.385824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.385884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:42.412513 1055021 cri.go:89] found id: ""
	I1208 01:58:42.412540 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.412555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:42.412562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:42.412621 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:42.439136 1055021 cri.go:89] found id: ""
	I1208 01:58:42.439202 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.439217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:42.439223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:42.439297 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:42.468994 1055021 cri.go:89] found id: ""
	I1208 01:58:42.469069 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.469092 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:42.469105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:42.469190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:42.493446 1055021 cri.go:89] found id: ""
	I1208 01:58:42.493481 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.493489 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:42.493496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:42.493573 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:42.535705 1055021 cri.go:89] found id: ""
	I1208 01:58:42.535751 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.535760 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:42.535768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:42.535838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:42.565148 1055021 cri.go:89] found id: ""
	I1208 01:58:42.565174 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.565183 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:42.565189 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:42.565262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:42.592944 1055021 cri.go:89] found id: ""
	I1208 01:58:42.592967 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.592975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:42.592984 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:42.592995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.627360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:42.627389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:42.692577 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:42.692611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:42.710349 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:42.710378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:42.782051 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:42.782073 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:42.782085 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.310746 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.328999 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.329226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.355526 1055021 cri.go:89] found id: ""
	I1208 01:58:45.355554 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.355562 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.355569 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.355649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.385050 1055021 cri.go:89] found id: ""
	I1208 01:58:45.385073 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.385081 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.385087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.385146 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.409413 1055021 cri.go:89] found id: ""
	I1208 01:58:45.409438 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.409447 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.409452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.409510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:45.445870 1055021 cri.go:89] found id: ""
	I1208 01:58:45.445903 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.445912 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:45.445919 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:45.445988 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:45.473347 1055021 cri.go:89] found id: ""
	I1208 01:58:45.473382 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.473391 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:45.473397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:45.473465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:45.497721 1055021 cri.go:89] found id: ""
	I1208 01:58:45.497756 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.497765 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:45.497772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:45.497839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:45.529708 1055021 cri.go:89] found id: ""
	I1208 01:58:45.529739 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.529748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:45.529754 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:45.529829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:45.556748 1055021 cri.go:89] found id: ""
	I1208 01:58:45.556783 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.556792 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:45.556801 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:45.556812 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:45.623617 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:45.623652 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:45.642117 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:45.642151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:45.711093 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:45.711114 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:45.711127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.739133 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:45.739169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.836195 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:45.896793 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:45.896954 1055021 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:45.900444 1055021 out.go:179] * Enabled addons: 
	I1208 01:58:45.903391 1055021 addons.go:530] duration metric: took 1m57.256950319s for enable addons: enabled=[]
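Note: the dashboard addon apply above fails only because kube-apiserver is not reachable on localhost:8443; kubectl cannot download the OpenAPI schema it uses for client-side validation, so every manifest is rejected before it is ever sent. The --validate=false hint in the error text would not help here, since any apply would still need a live apiserver. A minimal sketch of how one might confirm this from inside the node (assuming SSH access to the minikube node; the crictl command mirrors the one minikube itself runs above, the curl probe is an added assumption):

    # Check whether any kube-apiserver container exists (running or exited).
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Probe the apiserver endpoint directly; "connection refused" confirms
    # nothing is listening on 8443, rather than a TLS or RBAC problem.
    curl -k https://localhost:8443/healthz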
	I1208 01:58:48.271013 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.282344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.282467 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.314973 1055021 cri.go:89] found id: ""
	I1208 01:58:48.315046 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.315078 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.315098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.315204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.344987 1055021 cri.go:89] found id: ""
	I1208 01:58:48.345017 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.345026 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.345033 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.345094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.370650 1055021 cri.go:89] found id: ""
	I1208 01:58:48.370674 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.370681 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.370687 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.370749 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.396253 1055021 cri.go:89] found id: ""
	I1208 01:58:48.396319 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.396334 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.396341 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.396410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.425208 1055021 cri.go:89] found id: ""
	I1208 01:58:48.425235 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.425244 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.425250 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.425312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:48.455125 1055021 cri.go:89] found id: ""
	I1208 01:58:48.455150 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.455160 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:48.455177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:48.455238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:48.479964 1055021 cri.go:89] found id: ""
	I1208 01:58:48.480043 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.480059 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:48.480067 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:48.480128 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:48.506875 1055021 cri.go:89] found id: ""
	I1208 01:58:48.506902 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.506911 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
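Note: the block above is minikube's control-plane probe: it first looks for a kube-apiserver process with pgrep, then asks CRI-O (via crictl) for containers matching each expected component name; an empty ID list for every component means the control plane never came up after the restart. A rough hand-run equivalent of that per-component check (a hypothetical helper loop, not minikube's own code) could look like:

    # Manual version of the per-component container check seen in the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done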
	I1208 01:58:48.506920 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:48.506933 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:48.581685 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:48.581724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:48.600281 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:48.600313 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:48.663184 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:48.663203 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:48.663217 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:48.691509 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:48.691549 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
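Note: with no containers to inspect, the log-gathering step falls back to node-level sources: the kubelet and CRI-O units via journalctl, filtered kernel messages via dmesg, kubectl describe nodes (which fails with the same connection-refused error), and a container listing that prefers crictl but falls back to docker ps -a. The same cycle then repeats every few seconds until the retry budget is exhausted. When triaging a node in this state by hand, the same collection pass can be reproduced with the exact commands shown in the log:

    # Node-level log collection, mirroring the commands minikube runs above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a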
	I1208 01:58:51.221462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.231909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.231985 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.262905 1055021 cri.go:89] found id: ""
	I1208 01:58:51.262932 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.262940 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.262946 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.263006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.293540 1055021 cri.go:89] found id: ""
	I1208 01:58:51.293567 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.293576 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.293582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.293639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.324201 1055021 cri.go:89] found id: ""
	I1208 01:58:51.324228 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.324236 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.324242 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.324298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.351933 1055021 cri.go:89] found id: ""
	I1208 01:58:51.351960 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.351974 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.351981 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.352040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.376814 1055021 cri.go:89] found id: ""
	I1208 01:58:51.376836 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.376845 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.376851 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.376909 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.401752 1055021 cri.go:89] found id: ""
	I1208 01:58:51.401776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.401785 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.401791 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.401848 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.430825 1055021 cri.go:89] found id: ""
	I1208 01:58:51.430861 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.430870 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.430876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.430938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.455641 1055021 cri.go:89] found id: ""
	I1208 01:58:51.455666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.455674 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.455684 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.455695 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:51.527696 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:51.527719 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:51.527732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:51.557037 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:51.557072 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.589759 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:51.589789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:51.655851 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.655888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:54.174903 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.185290 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.185363 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.213134 1055021 cri.go:89] found id: ""
	I1208 01:58:54.213158 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.213167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.213174 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.213234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.238420 1055021 cri.go:89] found id: ""
	I1208 01:58:54.238446 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.238455 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.238461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.238524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.272304 1055021 cri.go:89] found id: ""
	I1208 01:58:54.272331 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.272339 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.272345 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.272405 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.302582 1055021 cri.go:89] found id: ""
	I1208 01:58:54.302608 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.302617 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.302623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.302683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.331550 1055021 cri.go:89] found id: ""
	I1208 01:58:54.331577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.331585 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.331591 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.331656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.356262 1055021 cri.go:89] found id: ""
	I1208 01:58:54.356285 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.356293 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.356300 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.356364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.382019 1055021 cri.go:89] found id: ""
	I1208 01:58:54.382045 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.382054 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.382060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.382120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.407111 1055021 cri.go:89] found id: ""
	I1208 01:58:54.407136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.407145 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.407154 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.407169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.470487 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.470509 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:54.470522 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:54.498660 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:54.498697 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:54.539432 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:54.539462 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.617690 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:54.617725 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.135616 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.145801 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.145871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.170603 1055021 cri.go:89] found id: ""
	I1208 01:58:57.170629 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.170637 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.170643 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.170701 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.197272 1055021 cri.go:89] found id: ""
	I1208 01:58:57.197300 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.197309 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.197315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.197379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.226393 1055021 cri.go:89] found id: ""
	I1208 01:58:57.226420 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.226430 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.226436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.226499 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.267139 1055021 cri.go:89] found id: ""
	I1208 01:58:57.267215 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.267239 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.267257 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.267350 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.302475 1055021 cri.go:89] found id: ""
	I1208 01:58:57.302497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.302505 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.302511 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.302571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.335859 1055021 cri.go:89] found id: ""
	I1208 01:58:57.335886 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.335894 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.335901 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.335959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.360608 1055021 cri.go:89] found id: ""
	I1208 01:58:57.360630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.360639 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.360646 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.360706 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.386045 1055021 cri.go:89] found id: ""
	I1208 01:58:57.386067 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.386076 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.386084 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.386096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:57.454478 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.454515 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.472469 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.472503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.545965 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.545998 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:57.546011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:57.584922 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.584959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:00.114637 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.175958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.176042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.249754 1055021 cri.go:89] found id: ""
	I1208 01:59:00.249778 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.249788 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.249795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.249868 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.304452 1055021 cri.go:89] found id: ""
	I1208 01:59:00.304487 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.304497 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.304503 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.304576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.346364 1055021 cri.go:89] found id: ""
	I1208 01:59:00.346424 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.346434 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.346465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.346577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.377822 1055021 cri.go:89] found id: ""
	I1208 01:59:00.377852 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.377862 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.377868 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.377963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.406823 1055021 cri.go:89] found id: ""
	I1208 01:59:00.406875 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.406884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.406908 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.406992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.435875 1055021 cri.go:89] found id: ""
	I1208 01:59:00.435911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.435920 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.435942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.436025 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.463084 1055021 cri.go:89] found id: ""
	I1208 01:59:00.463117 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.463126 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.463135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.463243 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.489555 1055021 cri.go:89] found id: ""
	I1208 01:59:00.489589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.489598 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.489626 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.489645 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.562522 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.562560 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.582358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.582389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.649877 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.649899 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:00.649912 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:00.682085 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.682120 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.216065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.226430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.226503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.253068 1055021 cri.go:89] found id: ""
	I1208 01:59:03.253093 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.253102 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.253109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.253168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.282867 1055021 cri.go:89] found id: ""
	I1208 01:59:03.282894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.282903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.282910 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.282969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.320054 1055021 cri.go:89] found id: ""
	I1208 01:59:03.320080 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.320092 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.320098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.320155 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.347220 1055021 cri.go:89] found id: ""
	I1208 01:59:03.347243 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.347252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.347258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.347319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.373498 1055021 cri.go:89] found id: ""
	I1208 01:59:03.373570 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.373595 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.373613 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.373703 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.399912 1055021 cri.go:89] found id: ""
	I1208 01:59:03.399948 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.399957 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.399964 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.400023 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.425601 1055021 cri.go:89] found id: ""
	I1208 01:59:03.425625 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.425634 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.425640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.425698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.454732 1055021 cri.go:89] found id: ""
	I1208 01:59:03.454758 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.454767 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.454775 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.454789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.530461 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.530493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.549828 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.549917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.620701 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.620720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:03.620735 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:03.649018 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.649058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.177524 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.187461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.187531 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.214977 1055021 cri.go:89] found id: ""
	I1208 01:59:06.214999 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.215008 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.215015 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.215094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.238383 1055021 cri.go:89] found id: ""
	I1208 01:59:06.238493 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.238514 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.238534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.238619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.272265 1055021 cri.go:89] found id: ""
	I1208 01:59:06.272329 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.272351 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.272367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.272453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.302615 1055021 cri.go:89] found id: ""
	I1208 01:59:06.302658 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.302672 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.302678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.302750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.331427 1055021 cri.go:89] found id: ""
	I1208 01:59:06.331491 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.331512 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.331534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.331619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.356630 1055021 cri.go:89] found id: ""
	I1208 01:59:06.356711 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.356726 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.356734 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.356792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.382232 1055021 cri.go:89] found id: ""
	I1208 01:59:06.382265 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.382273 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.382279 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.382345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.409564 1055021 cri.go:89] found id: ""
	I1208 01:59:06.409598 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.409607 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.409616 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.409629 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.474483 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.474521 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:06.492236 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.492265 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.581040 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:06.581061 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:06.581074 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:06.609481 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.609528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
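Each cycle above probes the node for control-plane containers by name and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the equivalent manual probe, assuming shell access to the minikube node and crictl on PATH (the loop itself is illustrative and not part of the test harness):

  # Ask the CRI runtime for each expected control-plane container by name,
  # mirroring the "listing CRI containers" / "found id" lines in the log above.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="$name")
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""
    else
      echo "$name: $ids"
    fi
  done

Since the control-plane components here are expected to run as static pods managed by the kubelet, repeating this probe after a kubelet restart would show whether they ever get created.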
	I1208 01:59:09.142358 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.152558 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.152645 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.176404 1055021 cri.go:89] found id: ""
	I1208 01:59:09.176469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.176483 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.176494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.176555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.200664 1055021 cri.go:89] found id: ""
	I1208 01:59:09.200687 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.200696 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.200702 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.200759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.227242 1055021 cri.go:89] found id: ""
	I1208 01:59:09.227266 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.227274 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.227280 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.227339 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.251746 1055021 cri.go:89] found id: ""
	I1208 01:59:09.251777 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.251786 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.251792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.251859 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.285331 1055021 cri.go:89] found id: ""
	I1208 01:59:09.285356 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.285365 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.285371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.285438 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.323377 1055021 cri.go:89] found id: ""
	I1208 01:59:09.323403 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.323411 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.323418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.323479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.348974 1055021 cri.go:89] found id: ""
	I1208 01:59:09.349042 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.349058 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.349065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.349127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.378922 1055021 cri.go:89] found id: ""
	I1208 01:59:09.378954 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.378962 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.378972 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.378983 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.444646 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.444685 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.462014 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.462050 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.537469 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.537502 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:09.537514 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:09.568427 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.568465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.103793 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.114409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.114485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.143200 1055021 cri.go:89] found id: ""
	I1208 01:59:12.143235 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.143245 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.143251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.143323 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.171946 1055021 cri.go:89] found id: ""
	I1208 01:59:12.171971 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.171979 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.171985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.172050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.196625 1055021 cri.go:89] found id: ""
	I1208 01:59:12.196651 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.196661 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.196669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.196775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.223108 1055021 cri.go:89] found id: ""
	I1208 01:59:12.223178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.223203 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.223221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.223315 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.253115 1055021 cri.go:89] found id: ""
	I1208 01:59:12.253141 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.253155 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.253173 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.253271 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.293405 1055021 cri.go:89] found id: ""
	I1208 01:59:12.293429 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.293438 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.293444 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.293512 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.323970 1055021 cri.go:89] found id: ""
	I1208 01:59:12.324002 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.324011 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.324017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.324087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.355979 1055021 cri.go:89] found id: ""
	I1208 01:59:12.356005 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.356013 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.356023 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.356035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.421458 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.421496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.440234 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.440269 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.509186 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.509214 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:12.509226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:12.541753 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.541790 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.078928 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.091792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.091882 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.118461 1055021 cri.go:89] found id: ""
	I1208 01:59:15.118482 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.118490 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.118496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.118561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.143588 1055021 cri.go:89] found id: ""
	I1208 01:59:15.143612 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.143621 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.143627 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.143687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.174121 1055021 cri.go:89] found id: ""
	I1208 01:59:15.174149 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.174158 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.174164 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.174281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.202466 1055021 cri.go:89] found id: ""
	I1208 01:59:15.202489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.202498 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.202504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.202563 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.229640 1055021 cri.go:89] found id: ""
	I1208 01:59:15.229663 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.229672 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.229678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.229737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.259982 1055021 cri.go:89] found id: ""
	I1208 01:59:15.260013 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.260021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.260027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.260085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.299510 1055021 cri.go:89] found id: ""
	I1208 01:59:15.299535 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.299544 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.299551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.299639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.327621 1055021 cri.go:89] found id: ""
	I1208 01:59:15.327655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.327664 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.327673 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.327684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:15.394588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.394632 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.412251 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.412283 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.478739 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.478760 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:15.478772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:15.507201 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.507279 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.049265 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.060577 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.060652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.087023 1055021 cri.go:89] found id: ""
	I1208 01:59:18.087050 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.087066 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.087073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.087132 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.115800 1055021 cri.go:89] found id: ""
	I1208 01:59:18.115826 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.115835 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.115841 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.115901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.145764 1055021 cri.go:89] found id: ""
	I1208 01:59:18.145787 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.145797 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.145803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.145862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.174947 1055021 cri.go:89] found id: ""
	I1208 01:59:18.174974 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.174983 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.174990 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.175050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.200824 1055021 cri.go:89] found id: ""
	I1208 01:59:18.200847 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.200857 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.200863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.200935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.229145 1055021 cri.go:89] found id: ""
	I1208 01:59:18.229168 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.229176 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.229185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.229246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.266059 1055021 cri.go:89] found id: ""
	I1208 01:59:18.266083 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.266092 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.266098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.266159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.293538 1055021 cri.go:89] found id: ""
	I1208 01:59:18.293605 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.293630 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.293657 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.293682 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.366543 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.366580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.387334 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.387367 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.457441 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:18.457480 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:18.457496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:18.486126 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.486159 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:21.020889 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.031877 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.031948 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.061454 1055021 cri.go:89] found id: ""
	I1208 01:59:21.061480 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.061489 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.061496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.061561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.086273 1055021 cri.go:89] found id: ""
	I1208 01:59:21.086300 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.086308 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.086315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.086373 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.112614 1055021 cri.go:89] found id: ""
	I1208 01:59:21.112637 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.112646 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.112652 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.112710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.142489 1055021 cri.go:89] found id: ""
	I1208 01:59:21.142511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.142521 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.142527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.142584 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.167579 1055021 cri.go:89] found id: ""
	I1208 01:59:21.167602 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.167618 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.167624 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.167683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.192114 1055021 cri.go:89] found id: ""
	I1208 01:59:21.192178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.192194 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.192202 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.192266 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.216638 1055021 cri.go:89] found id: ""
	I1208 01:59:21.216660 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.216669 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.216681 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.216739 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.241924 1055021 cri.go:89] found id: ""
	I1208 01:59:21.241956 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.241965 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.241989 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.242005 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.320443 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.320516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.339967 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.340098 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.405503 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.405526 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:21.405540 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:21.433479 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.433513 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:23.960720 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:23.971271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:23.971346 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:23.996003 1055021 cri.go:89] found id: ""
	I1208 01:59:23.996028 1055021 logs.go:282] 0 containers: []
	W1208 01:59:23.996037 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:23.996044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:23.996111 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.024119 1055021 cri.go:89] found id: ""
	I1208 01:59:24.024146 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.024154 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.024160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.024239 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.051095 1055021 cri.go:89] found id: ""
	I1208 01:59:24.051179 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.051202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.051217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.051298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.076451 1055021 cri.go:89] found id: ""
	I1208 01:59:24.076477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.076486 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.076493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.076577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.105499 1055021 cri.go:89] found id: ""
	I1208 01:59:24.105527 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.105537 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.105543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.105656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.136713 1055021 cri.go:89] found id: ""
	I1208 01:59:24.136736 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.136744 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.136751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.136836 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.165410 1055021 cri.go:89] found id: ""
	I1208 01:59:24.165442 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.165453 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.165460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.165541 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.194981 1055021 cri.go:89] found id: ""
	I1208 01:59:24.195018 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.195028 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.195037 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.195049 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.260506 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.260541 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.281317 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.281351 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.350532 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.350562 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:24.350574 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:24.378730 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.378760 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.906964 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.918049 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.918151 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:26.944808 1055021 cri.go:89] found id: ""
	I1208 01:59:26.944832 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.944840 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:26.944863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:26.944936 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:26.969519 1055021 cri.go:89] found id: ""
	I1208 01:59:26.969552 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.969561 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:26.969583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:26.969664 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:26.997687 1055021 cri.go:89] found id: ""
	I1208 01:59:26.997721 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.997730 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:26.997736 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:26.997835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.029005 1055021 cri.go:89] found id: ""
	I1208 01:59:27.029029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.029037 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.029044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.029121 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.052964 1055021 cri.go:89] found id: ""
	I1208 01:59:27.052989 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.053006 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.053027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.053114 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.081309 1055021 cri.go:89] found id: ""
	I1208 01:59:27.081342 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.081352 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.081375 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.081454 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.105197 1055021 cri.go:89] found id: ""
	I1208 01:59:27.105230 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.105239 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.105245 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.105311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.129963 1055021 cri.go:89] found id: ""
	I1208 01:59:27.129994 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.130003 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.130012 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:27.130023 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:27.157821 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.157853 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:27.187177 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.187201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.257425 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.257459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.284073 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.284112 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.365290 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
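The repeated "connection refused" on localhost:8443 means nothing is listening on the apiserver port, which is consistent with the empty kube-apiserver container list above. A small sketch of a direct check on the node, assuming ss and curl are available (these commands are illustrative and not taken from the harness):

  # Expect no listener on :8443 while the apiserver is down.
  sudo ss -ltn 'sport = :8443'
  # A health probe fails the same way kubectl does in the log above.
  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"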
	I1208 01:59:29.866080 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.876623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.876700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.905223 1055021 cri.go:89] found id: ""
	I1208 01:59:29.905247 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.905257 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.905264 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.905328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:29.935886 1055021 cri.go:89] found id: ""
	I1208 01:59:29.935911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.935920 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:29.935928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:29.935989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:29.961459 1055021 cri.go:89] found id: ""
	I1208 01:59:29.961489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.961499 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:29.961521 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:29.961588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:29.989601 1055021 cri.go:89] found id: ""
	I1208 01:59:29.989666 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.989691 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:29.989709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:29.989794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.034678 1055021 cri.go:89] found id: ""
	I1208 01:59:30.034757 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.034783 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.034802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.034922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.068355 1055021 cri.go:89] found id: ""
	I1208 01:59:30.068380 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.068388 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.068395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.068456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.095676 1055021 cri.go:89] found id: ""
	I1208 01:59:30.095706 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.095717 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.095723 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.095801 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.122432 1055021 cri.go:89] found id: ""
	I1208 01:59:30.122469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.122479 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.122504 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.122543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.191149 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
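When no control-plane containers are found, each cycle falls back to gathering node-level logs, and "describe nodes" keeps failing with connection refused because nothing is listening on localhost:8443. The fallback commands are taken verbatim from the Run: lines above and below, collected here as one runnable sketch (the kubectl binary path and kubeconfig location are copied from the log, not assumptions):

	# Node-level log gathering performed on every retry cycle.
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig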
	I1208 01:59:30.191170 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:30.191183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:30.220413 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.220447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.258205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.258234 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.330424 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.330461 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:32.850065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.861143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.861227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.885421 1055021 cri.go:89] found id: ""
	I1208 01:59:32.885447 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.885457 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.885463 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.885524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.911689 1055021 cri.go:89] found id: ""
	I1208 01:59:32.911716 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.911726 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.911732 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.911794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:32.941141 1055021 cri.go:89] found id: ""
	I1208 01:59:32.941166 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.941175 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:32.941182 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:32.941244 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:32.970750 1055021 cri.go:89] found id: ""
	I1208 01:59:32.970771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.970779 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:32.970786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:32.970883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:32.996768 1055021 cri.go:89] found id: ""
	I1208 01:59:32.996797 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.996806 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:32.996812 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:32.996887 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.025374 1055021 cri.go:89] found id: ""
	I1208 01:59:33.025410 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.025419 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.025448 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.025547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.051845 1055021 cri.go:89] found id: ""
	I1208 01:59:33.051878 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.051888 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.051895 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.051969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.078543 1055021 cri.go:89] found id: ""
	I1208 01:59:33.078566 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.078575 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.078584 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.078597 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.096489 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.096518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.168941 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.168962 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:33.168977 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:33.197574 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.197616 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:33.226563 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.226590 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:35.798966 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.810253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:35.810325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:35.835492 1055021 cri.go:89] found id: ""
	I1208 01:59:35.835516 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.835525 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:35.835534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:35.835593 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:35.861797 1055021 cri.go:89] found id: ""
	I1208 01:59:35.861823 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.861833 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:35.861839 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:35.861901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:35.887036 1055021 cri.go:89] found id: ""
	I1208 01:59:35.887073 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.887083 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:35.887090 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:35.887159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:35.915379 1055021 cri.go:89] found id: ""
	I1208 01:59:35.915456 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.915478 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:35.915493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:35.915566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:35.940687 1055021 cri.go:89] found id: ""
	I1208 01:59:35.940714 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.940724 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:35.940730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:35.940839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:35.967960 1055021 cri.go:89] found id: ""
	I1208 01:59:35.968038 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.968060 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:35.968074 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:35.968147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:35.993884 1055021 cri.go:89] found id: ""
	I1208 01:59:35.993927 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.993936 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:35.993942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:35.994012 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:36.027031 1055021 cri.go:89] found id: ""
	I1208 01:59:36.027056 1055021 logs.go:282] 0 containers: []
	W1208 01:59:36.027074 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:36.027084 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:36.027097 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:36.092294 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:36.092315 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:36.092330 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:36.120891 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:36.120927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:36.148475 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:36.148507 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:36.216306 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:36.216344 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:38.734253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:38.744803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:38.744884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:38.777276 1055021 cri.go:89] found id: ""
	I1208 01:59:38.777305 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.777314 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:38.777320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:38.777379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:38.815858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.815894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.815903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:38.815909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:38.815979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:38.845051 1055021 cri.go:89] found id: ""
	I1208 01:59:38.845084 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.845093 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:38.845098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:38.845164 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:38.870145 1055021 cri.go:89] found id: ""
	I1208 01:59:38.870178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.870187 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:38.870193 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:38.870261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:38.897461 1055021 cri.go:89] found id: ""
	I1208 01:59:38.897489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.897498 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:38.897505 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:38.897564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:38.923327 1055021 cri.go:89] found id: ""
	I1208 01:59:38.923351 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.923360 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:38.923367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:38.923430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:38.949858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.949884 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.949893 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:38.949899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:38.949963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:38.975805 1055021 cri.go:89] found id: ""
	I1208 01:59:38.975831 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.975840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:38.975849 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:38.975861 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:39.040102 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:39.040140 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:39.057980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:39.058045 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:39.129261 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:39.129281 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:39.129297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:39.157488 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:39.157524 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:41.687952 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:41.698803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:41.698906 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:41.724062 1055021 cri.go:89] found id: ""
	I1208 01:59:41.724139 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.724171 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:41.724184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:41.724260 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:41.756674 1055021 cri.go:89] found id: ""
	I1208 01:59:41.756712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.756720 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:41.756727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:41.756797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:41.793181 1055021 cri.go:89] found id: ""
	I1208 01:59:41.793208 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.793217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:41.793223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:41.793289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:41.823566 1055021 cri.go:89] found id: ""
	I1208 01:59:41.823589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.823597 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:41.823603 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:41.823660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:41.848188 1055021 cri.go:89] found id: ""
	I1208 01:59:41.848215 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.848224 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:41.848231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:41.848289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:41.874016 1055021 cri.go:89] found id: ""
	I1208 01:59:41.874053 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.874062 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:41.874068 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:41.874144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:41.901494 1055021 cri.go:89] found id: ""
	I1208 01:59:41.901517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.901525 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:41.901531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:41.901588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:41.927897 1055021 cri.go:89] found id: ""
	I1208 01:59:41.927919 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.927928 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:41.927936 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:41.927948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:41.989449 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:41.989523 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:41.989543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:42.035690 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:42.035724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:42.065962 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:42.066011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:42.136350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:42.136460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.657754 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:44.669949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:44.670036 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:44.700311 1055021 cri.go:89] found id: ""
	I1208 01:59:44.700341 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.700352 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:44.700358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:44.700422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:44.726358 1055021 cri.go:89] found id: ""
	I1208 01:59:44.726383 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.726392 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:44.726398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:44.726461 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:44.761403 1055021 cri.go:89] found id: ""
	I1208 01:59:44.761430 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.761440 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:44.761447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:44.761503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:44.792746 1055021 cri.go:89] found id: ""
	I1208 01:59:44.792771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.792780 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:44.792786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:44.792845 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:44.822139 1055021 cri.go:89] found id: ""
	I1208 01:59:44.822170 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.822179 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:44.822185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:44.822246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:44.848969 1055021 cri.go:89] found id: ""
	I1208 01:59:44.849036 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.849051 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:44.849060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:44.849123 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:44.877689 1055021 cri.go:89] found id: ""
	I1208 01:59:44.877712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.877720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:44.877727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:44.877792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:44.905370 1055021 cri.go:89] found id: ""
	I1208 01:59:44.905394 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.905403 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:44.905412 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:44.905424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.923373 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:44.923410 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:44.995648 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:44.995670 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:44.995684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:45.028693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:45.028744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:45.080489 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:45.080534 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:47.697315 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:47.707837 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:47.707910 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:47.731910 1055021 cri.go:89] found id: ""
	I1208 01:59:47.731934 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.731943 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:47.731950 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:47.732009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:47.765844 1055021 cri.go:89] found id: ""
	I1208 01:59:47.765869 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.765887 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:47.765894 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:47.765955 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:47.805305 1055021 cri.go:89] found id: ""
	I1208 01:59:47.805328 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.805342 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:47.805349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:47.805407 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:47.832547 1055021 cri.go:89] found id: ""
	I1208 01:59:47.832572 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.832581 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:47.832587 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:47.832646 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:47.857492 1055021 cri.go:89] found id: ""
	I1208 01:59:47.857517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.857526 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:47.857533 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:47.857595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:47.885564 1055021 cri.go:89] found id: ""
	I1208 01:59:47.885591 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.885599 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:47.885606 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:47.885668 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:47.914630 1055021 cri.go:89] found id: ""
	I1208 01:59:47.914655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.914664 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:47.914671 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:47.914737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:47.944185 1055021 cri.go:89] found id: ""
	I1208 01:59:47.944216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.944226 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:47.944236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:47.944247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:47.973585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:47.973622 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:48.011189 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:48.011218 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:48.078148 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:48.078187 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:48.098135 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:48.098167 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:48.174366 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:50.674625 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:50.685161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:50.685235 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:50.712131 1055021 cri.go:89] found id: ""
	I1208 01:59:50.712158 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.712167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:50.712175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:50.712236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:50.741188 1055021 cri.go:89] found id: ""
	I1208 01:59:50.741216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.741224 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:50.741231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:50.741325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:50.778993 1055021 cri.go:89] found id: ""
	I1208 01:59:50.779016 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.779026 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:50.779034 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:50.779103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:50.820444 1055021 cri.go:89] found id: ""
	I1208 01:59:50.820477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.820487 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:50.820494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:50.820552 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:50.845727 1055021 cri.go:89] found id: ""
	I1208 01:59:50.845752 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.845761 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:50.845768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:50.845833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:50.875375 1055021 cri.go:89] found id: ""
	I1208 01:59:50.875398 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.875406 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:50.875412 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:50.875472 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:50.899812 1055021 cri.go:89] found id: ""
	I1208 01:59:50.899836 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.899846 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:50.899852 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:50.899911 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:50.925692 1055021 cri.go:89] found id: ""
	I1208 01:59:50.925717 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.925725 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:50.925735 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:50.925751 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:50.991330 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:50.991366 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:51.010240 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:51.010276 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:51.075773 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:51.075801 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:51.075813 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:51.104705 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:51.104737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:53.634984 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:53.645378 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:53.645451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:53.676623 1055021 cri.go:89] found id: ""
	I1208 01:59:53.676647 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.676657 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:53.676664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:53.676723 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:53.700948 1055021 cri.go:89] found id: ""
	I1208 01:59:53.700973 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.700982 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:53.700988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:53.701047 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:53.725665 1055021 cri.go:89] found id: ""
	I1208 01:59:53.725689 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.725698 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:53.725704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:53.725760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:53.750770 1055021 cri.go:89] found id: ""
	I1208 01:59:53.750794 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.750803 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:53.750809 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:53.750885 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:53.784279 1055021 cri.go:89] found id: ""
	I1208 01:59:53.784304 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.784312 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:53.784319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:53.784378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:53.812355 1055021 cri.go:89] found id: ""
	I1208 01:59:53.812381 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.812390 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:53.812396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:53.812456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:53.837608 1055021 cri.go:89] found id: ""
	I1208 01:59:53.837634 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.837642 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:53.837648 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:53.837709 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:53.863046 1055021 cri.go:89] found id: ""
	I1208 01:59:53.863076 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.863085 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:53.863095 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:53.863136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:53.928268 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:53.928309 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:53.945830 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:53.945860 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:54.012382 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:54.012407 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:54.012447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:54.043446 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:54.043481 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:56.571785 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:56.582156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:56.582228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:56.611270 1055021 cri.go:89] found id: ""
	I1208 01:59:56.611292 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.611301 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:56.611307 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:56.611371 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:56.638765 1055021 cri.go:89] found id: ""
	I1208 01:59:56.638788 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.638797 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:56.638802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:56.638888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:56.663341 1055021 cri.go:89] found id: ""
	I1208 01:59:56.663368 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.663377 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:56.663383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:56.663495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:56.688606 1055021 cri.go:89] found id: ""
	I1208 01:59:56.688633 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.688643 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:56.688649 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:56.688730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:56.714263 1055021 cri.go:89] found id: ""
	I1208 01:59:56.714287 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.714296 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:56.714303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:56.714379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:56.738023 1055021 cri.go:89] found id: ""
	I1208 01:59:56.738047 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.738056 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:56.738062 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:56.738141 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:56.767926 1055021 cri.go:89] found id: ""
	I1208 01:59:56.767951 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.767960 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:56.767966 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:56.768071 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:56.801241 1055021 cri.go:89] found id: ""
	I1208 01:59:56.801268 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.801277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:56.801286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:56.801317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:56.873621 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:56.873657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:56.891086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:56.891116 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:56.956286 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:56.956306 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:56.956319 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:56.991921 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:56.991965 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.538010 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:59.548530 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:59.548598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:59.574677 1055021 cri.go:89] found id: ""
	I1208 01:59:59.574701 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.574709 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:59.574716 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:59.574779 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:59.600311 1055021 cri.go:89] found id: ""
	I1208 01:59:59.600337 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.600346 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:59.600352 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:59.600410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:59.627833 1055021 cri.go:89] found id: ""
	I1208 01:59:59.627858 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.627867 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:59.627873 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:59.627946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:59.652005 1055021 cri.go:89] found id: ""
	I1208 01:59:59.652029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.652038 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:59.652044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:59.652138 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:59.676487 1055021 cri.go:89] found id: ""
	I1208 01:59:59.676511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.676519 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:59.676525 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:59.676581 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:59.701988 1055021 cri.go:89] found id: ""
	I1208 01:59:59.702012 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.702020 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:59.702027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:59.702085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:59.726000 1055021 cri.go:89] found id: ""
	I1208 01:59:59.726025 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.726034 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:59.726040 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:59.726100 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:59.751097 1055021 cri.go:89] found id: ""
	I1208 01:59:59.751123 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.751131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:59.751141 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:59.751154 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:59.832931 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:59.832954 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:59.832966 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:59.862055 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:59.862089 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.890385 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:59.890414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:59.959793 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:59.959825 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.477852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:02.489201 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:02.489312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:02.516698 1055021 cri.go:89] found id: ""
	I1208 02:00:02.516725 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.516734 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:02.516741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:02.516825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:02.545938 1055021 cri.go:89] found id: ""
	I1208 02:00:02.545965 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.545974 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:02.545980 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:02.546051 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:02.574765 1055021 cri.go:89] found id: ""
	I1208 02:00:02.574799 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.574808 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:02.574815 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:02.574920 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:02.600958 1055021 cri.go:89] found id: ""
	I1208 02:00:02.600984 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.600992 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:02.601001 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:02.601061 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:02.627836 1055021 cri.go:89] found id: ""
	I1208 02:00:02.627862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.627872 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:02.627879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:02.627942 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:02.654803 1055021 cri.go:89] found id: ""
	I1208 02:00:02.654831 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.654864 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:02.654872 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:02.654938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:02.682455 1055021 cri.go:89] found id: ""
	I1208 02:00:02.682487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.682503 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:02.682510 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:02.682577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:02.709680 1055021 cri.go:89] found id: ""
	I1208 02:00:02.709709 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.709718 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:02.709728 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:02.709741 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:02.776682 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:02.776761 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.795697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:02.795794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:02.873752 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:02.873773 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:02.873787 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:02.903468 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:02.903511 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.438786 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:05.449615 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:05.449691 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:05.475122 1055021 cri.go:89] found id: ""
	I1208 02:00:05.475147 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.475156 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:05.475162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:05.475223 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:05.500749 1055021 cri.go:89] found id: ""
	I1208 02:00:05.500772 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.500781 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:05.500788 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:05.500854 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:05.526357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.526435 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.526456 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:05.526475 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:05.526564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:05.553466 1055021 cri.go:89] found id: ""
	I1208 02:00:05.553493 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.553502 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:05.553509 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:05.553570 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:05.583119 1055021 cri.go:89] found id: ""
	I1208 02:00:05.583145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.583154 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:05.583161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:05.583229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:05.613357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.613385 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.613394 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:05.613401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:05.613465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:05.639303 1055021 cri.go:89] found id: ""
	I1208 02:00:05.639328 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.639337 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:05.639358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:05.639422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:05.666333 1055021 cri.go:89] found id: ""
	I1208 02:00:05.666372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.666382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:05.666392 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:05.666405 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.696869 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:05.696901 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:05.762499 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:05.762536 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:05.780857 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:05.780889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:05.848522 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:05.848585 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:05.848598 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.377424 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:08.388192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:08.388265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:08.414029 1055021 cri.go:89] found id: ""
	I1208 02:00:08.414050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.414059 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:08.414065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:08.414127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:08.441760 1055021 cri.go:89] found id: ""
	I1208 02:00:08.441782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.441790 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:08.441796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:08.441857 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:08.466751 1055021 cri.go:89] found id: ""
	I1208 02:00:08.466774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.466783 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:08.466789 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:08.466870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:08.493249 1055021 cri.go:89] found id: ""
	I1208 02:00:08.493272 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.493280 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:08.493287 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:08.493345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:08.519677 1055021 cri.go:89] found id: ""
	I1208 02:00:08.519707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.519716 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:08.519722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:08.519788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:08.545435 1055021 cri.go:89] found id: ""
	I1208 02:00:08.545460 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.545469 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:08.545476 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:08.545538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:08.576588 1055021 cri.go:89] found id: ""
	I1208 02:00:08.576612 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.576621 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:08.576628 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:08.576719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:08.602665 1055021 cri.go:89] found id: ""
	I1208 02:00:08.602689 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.602697 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:08.602706 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:08.602737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:08.668015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:08.668065 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:08.685174 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:08.685203 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:08.750092 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:08.750113 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:08.750127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.781244 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:08.781278 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.323549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:11.333988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:11.334059 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:11.359294 1055021 cri.go:89] found id: ""
	I1208 02:00:11.359316 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.359325 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:11.359331 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:11.359391 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:11.385252 1055021 cri.go:89] found id: ""
	I1208 02:00:11.385274 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.385283 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:11.385289 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:11.385354 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:11.411462 1055021 cri.go:89] found id: ""
	I1208 02:00:11.411485 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.411494 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:11.411501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:11.411560 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:11.437020 1055021 cri.go:89] found id: ""
	I1208 02:00:11.437043 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.437052 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:11.437059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:11.437142 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:11.462749 1055021 cri.go:89] found id: ""
	I1208 02:00:11.462774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.462788 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:11.462795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:11.462912 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:11.487618 1055021 cri.go:89] found id: ""
	I1208 02:00:11.487642 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.487650 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:11.487656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:11.487738 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:11.517338 1055021 cri.go:89] found id: ""
	I1208 02:00:11.517411 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.517435 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:11.517454 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:11.517582 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:11.543576 1055021 cri.go:89] found id: ""
	I1208 02:00:11.543608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.543618 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:11.543670 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:11.543687 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:11.605714 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:11.605738 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:11.605754 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:11.634573 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:11.634608 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.663270 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:11.663297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:11.728036 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:11.728073 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.245900 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:14.259346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:14.259447 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:14.292891 1055021 cri.go:89] found id: ""
	I1208 02:00:14.292913 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.292922 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:14.292928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:14.292995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:14.326384 1055021 cri.go:89] found id: ""
	I1208 02:00:14.326408 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.326418 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:14.326425 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:14.326485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:14.354623 1055021 cri.go:89] found id: ""
	I1208 02:00:14.354646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.354654 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:14.354660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:14.354719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:14.382160 1055021 cri.go:89] found id: ""
	I1208 02:00:14.382187 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.382196 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:14.382203 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:14.382261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:14.408072 1055021 cri.go:89] found id: ""
	I1208 02:00:14.408141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.408166 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:14.408184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:14.408273 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:14.433739 1055021 cri.go:89] found id: ""
	I1208 02:00:14.433767 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.433776 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:14.433783 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:14.433889 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:14.460882 1055021 cri.go:89] found id: ""
	I1208 02:00:14.460906 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.460914 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:14.460921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:14.461002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:14.486630 1055021 cri.go:89] found id: ""
	I1208 02:00:14.486707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.486732 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:14.486755 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:14.486781 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:14.552732 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:14.552769 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.570940 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:14.570975 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:14.636277 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:14.636301 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:14.636317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:14.664410 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:14.664447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:17.192894 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:17.203129 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:17.203200 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:17.228497 1055021 cri.go:89] found id: ""
	I1208 02:00:17.228519 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.228528 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:17.228534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:17.228598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:17.253841 1055021 cri.go:89] found id: ""
	I1208 02:00:17.253862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.253871 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:17.253887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:17.253945 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:17.284067 1055021 cri.go:89] found id: ""
	I1208 02:00:17.284088 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.284097 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:17.284103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:17.284162 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:17.320641 1055021 cri.go:89] found id: ""
	I1208 02:00:17.320668 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.320678 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:17.320684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:17.320748 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:17.347071 1055021 cri.go:89] found id: ""
	I1208 02:00:17.347094 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.347103 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:17.347109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:17.347227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:17.373328 1055021 cri.go:89] found id: ""
	I1208 02:00:17.373357 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.373366 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:17.373372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:17.373439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:17.400408 1055021 cri.go:89] found id: ""
	I1208 02:00:17.400437 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.400446 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:17.400456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:17.400515 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:17.426232 1055021 cri.go:89] found id: ""
	I1208 02:00:17.426268 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.426277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:17.426286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:17.426298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:17.491052 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:17.491092 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:17.509546 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:17.509575 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:17.578008 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:17.578068 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:17.578090 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:17.606330 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:17.606368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:20.139003 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:20.149823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:20.149894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:20.176541 1055021 cri.go:89] found id: ""
	I1208 02:00:20.176568 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.176577 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:20.176583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:20.176647 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:20.209117 1055021 cri.go:89] found id: ""
	I1208 02:00:20.209141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.209149 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:20.209156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:20.209222 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:20.235819 1055021 cri.go:89] found id: ""
	I1208 02:00:20.235846 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.235861 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:20.235867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:20.235933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:20.268968 1055021 cri.go:89] found id: ""
	I1208 02:00:20.268997 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.269006 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:20.269019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:20.269079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:20.302684 1055021 cri.go:89] found id: ""
	I1208 02:00:20.302712 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.302721 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:20.302728 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:20.302814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:20.330459 1055021 cri.go:89] found id: ""
	I1208 02:00:20.330535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.330550 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:20.330557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:20.330632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:20.358743 1055021 cri.go:89] found id: ""
	I1208 02:00:20.358778 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.358787 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:20.358793 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:20.358881 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:20.384853 1055021 cri.go:89] found id: ""
	I1208 02:00:20.384883 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.384892 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:20.384909 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:20.384921 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:20.450466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:20.450505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:20.468842 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:20.468872 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:20.533689 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:20.533717 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:20.533732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:20.561211 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:20.561245 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.093217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:23.103855 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:23.103935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:23.129008 1055021 cri.go:89] found id: ""
	I1208 02:00:23.129084 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.129113 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:23.129122 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:23.129192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:23.154045 1055021 cri.go:89] found id: ""
	I1208 02:00:23.154071 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.154079 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:23.154086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:23.154144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:23.179982 1055021 cri.go:89] found id: ""
	I1208 02:00:23.180009 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.180018 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:23.180025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:23.180085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:23.205725 1055021 cri.go:89] found id: ""
	I1208 02:00:23.205751 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.205760 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:23.205767 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:23.205825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:23.233180 1055021 cri.go:89] found id: ""
	I1208 02:00:23.233206 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.233214 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:23.233221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:23.233280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:23.260814 1055021 cri.go:89] found id: ""
	I1208 02:00:23.260841 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.260850 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:23.260856 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:23.260915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:23.289337 1055021 cri.go:89] found id: ""
	I1208 02:00:23.289369 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.289379 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:23.289384 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:23.289451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:23.326356 1055021 cri.go:89] found id: ""
	I1208 02:00:23.326383 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.326392 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:23.326401 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:23.326414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:23.344175 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:23.344207 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:23.409693 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:23.409767 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:23.409793 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:23.437814 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:23.437848 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.472006 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:23.472034 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.036954 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:26.050218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:26.050295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:26.084077 1055021 cri.go:89] found id: ""
	I1208 02:00:26.084101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.084110 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:26.084117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:26.084179 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:26.115433 1055021 cri.go:89] found id: ""
	I1208 02:00:26.115458 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.115467 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:26.115473 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:26.115548 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:26.142798 1055021 cri.go:89] found id: ""
	I1208 02:00:26.142821 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.142829 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:26.142836 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:26.142923 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:26.169427 1055021 cri.go:89] found id: ""
	I1208 02:00:26.169449 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.169457 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:26.169465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:26.169523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:26.196837 1055021 cri.go:89] found id: ""
	I1208 02:00:26.196863 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.196873 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:26.196879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:26.196940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:26.222671 1055021 cri.go:89] found id: ""
	I1208 02:00:26.222694 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.222702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:26.222709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:26.222770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:26.258674 1055021 cri.go:89] found id: ""
	I1208 02:00:26.258696 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.258705 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:26.258711 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:26.258769 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:26.297463 1055021 cri.go:89] found id: ""
	I1208 02:00:26.297486 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.297496 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:26.297505 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:26.297520 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:26.329140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:26.329223 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:26.359625 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:26.359657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.424937 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:26.424974 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:26.443260 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:26.443293 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:26.509592 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:29.010492 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:29.023086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:29.023160 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:29.051358 1055021 cri.go:89] found id: ""
	I1208 02:00:29.051380 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.051389 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:29.051395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:29.051456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:29.085536 1055021 cri.go:89] found id: ""
	I1208 02:00:29.085566 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.085575 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:29.085583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:29.085649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:29.114380 1055021 cri.go:89] found id: ""
	I1208 02:00:29.114407 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.114416 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:29.114422 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:29.114483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:29.139608 1055021 cri.go:89] found id: ""
	I1208 02:00:29.139697 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.139713 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:29.139722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:29.139800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:29.167030 1055021 cri.go:89] found id: ""
	I1208 02:00:29.167055 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.167100 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:29.167107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:29.167173 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:29.191898 1055021 cri.go:89] found id: ""
	I1208 02:00:29.191920 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.191929 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:29.191935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:29.191992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:29.216839 1055021 cri.go:89] found id: ""
	I1208 02:00:29.216870 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.216879 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:29.216889 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:29.216975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:29.246347 1055021 cri.go:89] found id: ""
	I1208 02:00:29.246372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.246382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:29.246391 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:29.246421 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:29.266473 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:29.266509 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:29.345611 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:29.345636 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:29.345648 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:29.375020 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:29.375060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:29.402360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:29.402386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:31.967515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:31.978076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:31.978147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:32.018381 1055021 cri.go:89] found id: ""
	I1208 02:00:32.018457 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.018480 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:32.018500 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:32.018611 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:32.054678 1055021 cri.go:89] found id: ""
	I1208 02:00:32.054700 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.054709 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:32.054715 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:32.054775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:32.085659 1055021 cri.go:89] found id: ""
	I1208 02:00:32.085686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.085695 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:32.085701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:32.085809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:32.112827 1055021 cri.go:89] found id: ""
	I1208 02:00:32.112892 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.112907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:32.112914 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:32.112973 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:32.141486 1055021 cri.go:89] found id: ""
	I1208 02:00:32.141513 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.141521 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:32.141527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:32.141591 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:32.166463 1055021 cri.go:89] found id: ""
	I1208 02:00:32.166489 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.166498 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:32.166504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:32.166566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:32.196018 1055021 cri.go:89] found id: ""
	I1208 02:00:32.196086 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.196111 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:32.196125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:32.196198 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:32.219763 1055021 cri.go:89] found id: ""
	I1208 02:00:32.219802 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.219812 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:32.219821 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:32.219834 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:32.237401 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:32.237431 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:32.335697 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:32.335720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:32.335732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:32.364998 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:32.365043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:32.394072 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:32.394099 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:34.958230 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:34.968535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:34.968606 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:34.993490 1055021 cri.go:89] found id: ""
	I1208 02:00:34.993515 1055021 logs.go:282] 0 containers: []
	W1208 02:00:34.993524 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:34.993531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:34.993588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:35.026482 1055021 cri.go:89] found id: ""
	I1208 02:00:35.026511 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.026521 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:35.026529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:35.026595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:35.062109 1055021 cri.go:89] found id: ""
	I1208 02:00:35.062138 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.062147 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:35.062154 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:35.062218 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:35.094672 1055021 cri.go:89] found id: ""
	I1208 02:00:35.094706 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.094715 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:35.094722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:35.094784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:35.120981 1055021 cri.go:89] found id: ""
	I1208 02:00:35.121007 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.121016 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:35.121022 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:35.121087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:35.147283 1055021 cri.go:89] found id: ""
	I1208 02:00:35.147310 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.147321 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:35.147329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:35.147392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:35.174946 1055021 cri.go:89] found id: ""
	I1208 02:00:35.175038 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.175075 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:35.175115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:35.175224 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:35.205558 1055021 cri.go:89] found id: ""
	I1208 02:00:35.205583 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.205592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:35.205601 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:35.205636 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:35.273454 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:35.273537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:35.294102 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:35.294182 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:35.363206 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:35.363227 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:35.363240 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:35.391418 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:35.391457 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:37.922946 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:37.933320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:37.933392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:37.959213 1055021 cri.go:89] found id: ""
	I1208 02:00:37.959237 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.959247 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:37.959253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:37.959311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:37.983822 1055021 cri.go:89] found id: ""
	I1208 02:00:37.983844 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.983853 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:37.983859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:37.983917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:38.015881 1055021 cri.go:89] found id: ""
	I1208 02:00:38.015909 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.015919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:38.015927 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:38.015994 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:38.047948 1055021 cri.go:89] found id: ""
	I1208 02:00:38.047971 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.047979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:38.047985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:38.048049 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:38.098187 1055021 cri.go:89] found id: ""
	I1208 02:00:38.098216 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.098227 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:38.098234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:38.098298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:38.122930 1055021 cri.go:89] found id: ""
	I1208 02:00:38.122952 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.122960 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:38.122967 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:38.123028 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:38.148405 1055021 cri.go:89] found id: ""
	I1208 02:00:38.148439 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.148449 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:38.148455 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:38.148513 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:38.174446 1055021 cri.go:89] found id: ""
	I1208 02:00:38.174522 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.174544 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:38.174565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:38.174602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:38.239470 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:38.239505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:38.257924 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:38.258079 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:38.328235 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:38.328302 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:38.328321 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:38.356585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:38.356619 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:40.887527 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:40.897939 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:40.898011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:40.922663 1055021 cri.go:89] found id: ""
	I1208 02:00:40.922686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.922695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:40.922701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:40.922760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:40.947304 1055021 cri.go:89] found id: ""
	I1208 02:00:40.947371 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.947397 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:40.947409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:40.947484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:40.973263 1055021 cri.go:89] found id: ""
	I1208 02:00:40.973290 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.973299 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:40.973305 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:40.973365 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:40.998615 1055021 cri.go:89] found id: ""
	I1208 02:00:40.998648 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.998658 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:40.998665 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:40.998735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:41.034153 1055021 cri.go:89] found id: ""
	I1208 02:00:41.034180 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.034190 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:41.034196 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:41.034255 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:41.063886 1055021 cri.go:89] found id: ""
	I1208 02:00:41.063916 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.063925 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:41.063931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:41.063993 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:41.090937 1055021 cri.go:89] found id: ""
	I1208 02:00:41.090966 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.090976 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:41.090982 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:41.091046 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:41.117814 1055021 cri.go:89] found id: ""
	I1208 02:00:41.117839 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.117849 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:41.117858 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:41.117870 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:41.182312 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:41.182348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:41.200044 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:41.200071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:41.273066 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:41.273095 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:41.273108 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:41.308256 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:41.308298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:43.843380 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:43.854135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:43.854204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:43.879332 1055021 cri.go:89] found id: ""
	I1208 02:00:43.879356 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.879365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:43.879371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:43.879431 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:43.903897 1055021 cri.go:89] found id: ""
	I1208 02:00:43.903921 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.903930 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:43.903935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:43.904010 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:43.928349 1055021 cri.go:89] found id: ""
	I1208 02:00:43.928377 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.928386 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:43.928396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:43.928453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:43.957013 1055021 cri.go:89] found id: ""
	I1208 02:00:43.957046 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.957060 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:43.957066 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:43.957137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:43.981711 1055021 cri.go:89] found id: ""
	I1208 02:00:43.981784 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.981819 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:43.981843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:43.981933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:44.021808 1055021 cri.go:89] found id: ""
	I1208 02:00:44.021842 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.021851 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:44.021859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:44.021940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:44.053536 1055021 cri.go:89] found id: ""
	I1208 02:00:44.053608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.053631 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:44.053650 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:44.053735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:44.087893 1055021 cri.go:89] found id: ""
	I1208 02:00:44.087958 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.087975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:44.087985 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:44.087997 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:44.153453 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:44.153493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:44.172720 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:44.172750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:44.242553 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:44.242575 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:44.242587 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:44.273804 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:44.273889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:46.805601 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:46.815929 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:46.815999 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:46.840623 1055021 cri.go:89] found id: ""
	I1208 02:00:46.840646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.840655 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:46.840661 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:46.840721 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:46.866056 1055021 cri.go:89] found id: ""
	I1208 02:00:46.866082 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.866090 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:46.866096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:46.866156 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:46.890598 1055021 cri.go:89] found id: ""
	I1208 02:00:46.890623 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.890632 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:46.890638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:46.890699 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:46.917031 1055021 cri.go:89] found id: ""
	I1208 02:00:46.917101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.917125 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:46.917142 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:46.917230 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:46.941427 1055021 cri.go:89] found id: ""
	I1208 02:00:46.941450 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.941459 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:46.941465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:46.941524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:46.971991 1055021 cri.go:89] found id: ""
	I1208 02:00:46.972015 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.972024 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:46.972031 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:46.972087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:47.000365 1055021 cri.go:89] found id: ""
	I1208 02:00:47.000393 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.000402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:47.000409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:47.000500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:47.039853 1055021 cri.go:89] found id: ""
	I1208 02:00:47.039934 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.039968 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:47.040014 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:47.040070 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:47.124159 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:47.124199 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:47.142393 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:47.142436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:47.204667 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:47.204688 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:47.204700 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:47.233531 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:47.233572 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:49.777314 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:49.787953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:49.788027 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:49.814344 1055021 cri.go:89] found id: ""
	I1208 02:00:49.814368 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.814376 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:49.814383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:49.814443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:49.843148 1055021 cri.go:89] found id: ""
	I1208 02:00:49.843172 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.843180 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:49.843187 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:49.843245 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:49.868221 1055021 cri.go:89] found id: ""
	I1208 02:00:49.868245 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.868253 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:49.868260 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:49.868319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:49.892756 1055021 cri.go:89] found id: ""
	I1208 02:00:49.892782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.892792 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:49.892799 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:49.892879 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:49.921697 1055021 cri.go:89] found id: ""
	I1208 02:00:49.921730 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.921738 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:49.921745 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:49.921818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:49.946935 1055021 cri.go:89] found id: ""
	I1208 02:00:49.947000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.947018 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:49.947025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:49.947102 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:49.972386 1055021 cri.go:89] found id: ""
	I1208 02:00:49.972410 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.972418 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:49.972427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:49.972485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:49.997299 1055021 cri.go:89] found id: ""
	I1208 02:00:49.997324 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.997332 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:49.997342 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:49.997354 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:50.024427 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:50.024465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:50.106428 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:50.106452 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:50.106466 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:50.134825 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:50.134944 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:50.164257 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:50.164286 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:52.731852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:52.743466 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:52.743547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:52.770730 1055021 cri.go:89] found id: ""
	I1208 02:00:52.770754 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.770763 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:52.770769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:52.770837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:52.795524 1055021 cri.go:89] found id: ""
	I1208 02:00:52.795547 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.795555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:52.795562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:52.795622 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:52.820947 1055021 cri.go:89] found id: ""
	I1208 02:00:52.820976 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.820986 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:52.820993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:52.821054 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:52.846461 1055021 cri.go:89] found id: ""
	I1208 02:00:52.846487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.846495 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:52.846502 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:52.846614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:52.876556 1055021 cri.go:89] found id: ""
	I1208 02:00:52.876582 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.876591 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:52.876598 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:52.876658 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:52.902890 1055021 cri.go:89] found id: ""
	I1208 02:00:52.902915 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.902924 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:52.902931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:52.902995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:52.927861 1055021 cri.go:89] found id: ""
	I1208 02:00:52.927936 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.927952 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:52.927960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:52.928018 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:52.952070 1055021 cri.go:89] found id: ""
	I1208 02:00:52.952093 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.952102 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:52.952111 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:52.952123 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:52.969988 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:52.970071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:53.047400 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:53.047420 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:53.047432 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:53.079007 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:53.079096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:53.110493 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:53.110518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:55.678655 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:55.689237 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:55.689308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:55.716663 1055021 cri.go:89] found id: ""
	I1208 02:00:55.716685 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.716694 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:55.716700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:55.716767 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:55.742016 1055021 cri.go:89] found id: ""
	I1208 02:00:55.742042 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.742051 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:55.742057 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:55.742117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:55.771093 1055021 cri.go:89] found id: ""
	I1208 02:00:55.771116 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.771125 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:55.771131 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:55.771192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:55.795221 1055021 cri.go:89] found id: ""
	I1208 02:00:55.795243 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.795252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:55.795258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:55.795321 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:55.824380 1055021 cri.go:89] found id: ""
	I1208 02:00:55.824402 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.824411 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:55.824417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:55.824482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:55.853339 1055021 cri.go:89] found id: ""
	I1208 02:00:55.853362 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.853370 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:55.853376 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:55.853439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:55.879120 1055021 cri.go:89] found id: ""
	I1208 02:00:55.879145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.879154 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:55.879160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:55.879229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:55.904782 1055021 cri.go:89] found id: ""
	I1208 02:00:55.904811 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.904820 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:55.904829 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:55.904840 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:55.936603 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:55.936627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:56.002394 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:56.002436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:56.025805 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:56.025962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:56.100621 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:56.100643 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:56.100655 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:58.632608 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:58.643205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:58.643281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:58.668717 1055021 cri.go:89] found id: ""
	I1208 02:00:58.668741 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.668750 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:58.668756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:58.668818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:58.693510 1055021 cri.go:89] found id: ""
	I1208 02:00:58.693535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.693543 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:58.693550 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:58.693614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:58.718959 1055021 cri.go:89] found id: ""
	I1208 02:00:58.719050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.719071 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:58.719079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:58.719153 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:58.743668 1055021 cri.go:89] found id: ""
	I1208 02:00:58.743691 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.743700 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:58.743707 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:58.743765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:58.772612 1055021 cri.go:89] found id: ""
	I1208 02:00:58.772679 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.772700 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:58.772718 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:58.772809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:58.798178 1055021 cri.go:89] found id: ""
	I1208 02:00:58.798204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.798212 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:58.798218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:58.798278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:58.822926 1055021 cri.go:89] found id: ""
	I1208 02:00:58.823000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.823018 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:58.823026 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:58.823097 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:58.849170 1055021 cri.go:89] found id: ""
	I1208 02:00:58.849204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.849214 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:58.849249 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:58.849273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:58.916845 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:58.916884 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:58.934980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:58.935008 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:59.004330 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:59.004355 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:59.004368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:59.034521 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:59.034558 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.569349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:01.581275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:01.581356 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:01.614013 1055021 cri.go:89] found id: ""
	I1208 02:01:01.614040 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.614052 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:01.614059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:01.614120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:01.642283 1055021 cri.go:89] found id: ""
	I1208 02:01:01.642311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.642321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:01.642327 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:01.642388 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:01.668888 1055021 cri.go:89] found id: ""
	I1208 02:01:01.668916 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.668927 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:01.668933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:01.669045 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:01.696848 1055021 cri.go:89] found id: ""
	I1208 02:01:01.696890 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.696917 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:01.696924 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:01.697002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:01.724280 1055021 cri.go:89] found id: ""
	I1208 02:01:01.724314 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.724323 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:01.724329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:01.724397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:01.757961 1055021 cri.go:89] found id: ""
	I1208 02:01:01.757993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.758002 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:01.758009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:01.758076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:01.791626 1055021 cri.go:89] found id: ""
	I1208 02:01:01.791652 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.791663 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:01.791669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:01.791734 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:01.824543 1055021 cri.go:89] found id: ""
	I1208 02:01:01.824614 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.824631 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:01.824643 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:01.824656 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.858339 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:01.858368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:01.923001 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:01.923043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:01.942107 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:01.942139 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:02.016342 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:02.016379 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:02.016393 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
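	The block above is one full iteration of minikube's wait loop for the apiserver: a pgrep for a kube-apiserver process, a crictl query per control-plane component, then a round of log gathering (kubelet, dmesg, describe nodes, CRI-O, container status). The same cycle recurs below roughly every three seconds with only timestamps and helper PIDs changing, and every query keeps coming back empty because the apiserver never came up. As a rough illustration only (not minikube's own code), the per-component check amounts to the following Go sketch, assuming crictl is on PATH on the node and passwordless sudo is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Components the log polls for, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation the log records: list containers in any state whose
			// name matches the component, printing only their IDs.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the warning emitted when the query returns nothing.
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}

	Empty output from crictl for a given --name filter is exactly what produces the repeated "No container was found matching ..." warnings in the entries that follow.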
	I1208 02:01:04.550723 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:04.561389 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:04.561458 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:04.587293 1055021 cri.go:89] found id: ""
	I1208 02:01:04.587319 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.587329 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:04.587335 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:04.587398 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:04.612287 1055021 cri.go:89] found id: ""
	I1208 02:01:04.612313 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.612321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:04.612328 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:04.612389 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:04.637981 1055021 cri.go:89] found id: ""
	I1208 02:01:04.638006 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.638016 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:04.638023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:04.638083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:04.666122 1055021 cri.go:89] found id: ""
	I1208 02:01:04.666150 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.666159 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:04.666166 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:04.666228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:04.691775 1055021 cri.go:89] found id: ""
	I1208 02:01:04.691799 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.691807 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:04.691813 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:04.691877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:04.716584 1055021 cri.go:89] found id: ""
	I1208 02:01:04.716610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.716619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:04.716626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:04.716684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:04.741247 1055021 cri.go:89] found id: ""
	I1208 02:01:04.741284 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.741297 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:04.741303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:04.741394 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:04.777041 1055021 cri.go:89] found id: ""
	I1208 02:01:04.777070 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.777079 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:04.777088 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:04.777100 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:04.797448 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:04.797478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:04.865442 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:04.865465 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:04.865478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.893232 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:04.893270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:04.921152 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:04.921183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.486177 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:07.496522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:07.496608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:07.521126 1055021 cri.go:89] found id: ""
	I1208 02:01:07.521202 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.521226 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:07.521244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:07.521333 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:07.549393 1055021 cri.go:89] found id: ""
	I1208 02:01:07.549458 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.549483 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:07.549501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:07.549585 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:07.575624 1055021 cri.go:89] found id: ""
	I1208 02:01:07.575699 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.575715 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:07.575722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:07.575784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:07.604231 1055021 cri.go:89] found id: ""
	I1208 02:01:07.604296 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.604310 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:07.604317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:07.604377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:07.629146 1055021 cri.go:89] found id: ""
	I1208 02:01:07.629177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.629186 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:07.629192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:07.629267 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:07.654573 1055021 cri.go:89] found id: ""
	I1208 02:01:07.654598 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.654607 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:07.654614 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:07.654682 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:07.679672 1055021 cri.go:89] found id: ""
	I1208 02:01:07.679746 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.679762 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:07.679769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:07.679841 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:07.705327 1055021 cri.go:89] found id: ""
	I1208 02:01:07.705353 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.705362 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:07.705371 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:07.705386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.770583 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:07.770665 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:07.788444 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:07.788473 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:07.862214 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:07.862236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:07.862248 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:07.891006 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:07.891043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.422919 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:10.433424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:10.433496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:10.458269 1055021 cri.go:89] found id: ""
	I1208 02:01:10.458295 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.458303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:10.458319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:10.458397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:10.485114 1055021 cri.go:89] found id: ""
	I1208 02:01:10.485138 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.485146 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:10.485152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:10.485211 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:10.512785 1055021 cri.go:89] found id: ""
	I1208 02:01:10.512808 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.512817 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:10.512823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:10.512884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:10.538032 1055021 cri.go:89] found id: ""
	I1208 02:01:10.538057 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.538066 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:10.538072 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:10.538130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:10.568288 1055021 cri.go:89] found id: ""
	I1208 02:01:10.568311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.568364 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:10.568379 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:10.568445 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:10.593987 1055021 cri.go:89] found id: ""
	I1208 02:01:10.594012 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.594021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:10.594028 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:10.594087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:10.619212 1055021 cri.go:89] found id: ""
	I1208 02:01:10.619237 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.619245 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:10.619251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:10.619311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:10.645349 1055021 cri.go:89] found id: ""
	I1208 02:01:10.645384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.645393 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:10.645402 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:10.645414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:10.707691 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:10.707713 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:10.707726 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:10.735113 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:10.735148 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.768113 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:10.768142 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:10.843634 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:10.843672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.362994 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:13.373991 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:13.374082 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:13.400090 1055021 cri.go:89] found id: ""
	I1208 02:01:13.400127 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.400136 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:13.400143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:13.400212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:13.425846 1055021 cri.go:89] found id: ""
	I1208 02:01:13.425872 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.425881 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:13.425887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:13.425949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:13.451450 1055021 cri.go:89] found id: ""
	I1208 02:01:13.451478 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.451487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:13.451493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:13.451554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:13.476315 1055021 cri.go:89] found id: ""
	I1208 02:01:13.476341 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.476350 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:13.476357 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:13.476419 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:13.503320 1055021 cri.go:89] found id: ""
	I1208 02:01:13.503346 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.503355 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:13.503362 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:13.503430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:13.528258 1055021 cri.go:89] found id: ""
	I1208 02:01:13.528290 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.528299 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:13.528306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:13.528375 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:13.553751 1055021 cri.go:89] found id: ""
	I1208 02:01:13.553784 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.553794 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:13.553800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:13.553871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:13.580159 1055021 cri.go:89] found id: ""
	I1208 02:01:13.580183 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.580192 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:13.580200 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:13.580212 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:13.649628 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:13.649678 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.668358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:13.668451 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:13.739767 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:13.739835 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:13.739881 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:13.771646 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:13.771684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.306613 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:16.317302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:16.317372 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:16.343331 1055021 cri.go:89] found id: ""
	I1208 02:01:16.343356 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.343365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:16.343374 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:16.343433 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:16.369486 1055021 cri.go:89] found id: ""
	I1208 02:01:16.369507 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.369516 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:16.369522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:16.369589 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:16.394887 1055021 cri.go:89] found id: ""
	I1208 02:01:16.394911 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.394919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:16.394926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:16.394983 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:16.419429 1055021 cri.go:89] found id: ""
	I1208 02:01:16.419453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.419461 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:16.419467 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:16.419532 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:16.447941 1055021 cri.go:89] found id: ""
	I1208 02:01:16.448014 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.448038 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:16.448060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:16.448137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:16.477380 1055021 cri.go:89] found id: ""
	I1208 02:01:16.477404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.477414 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:16.477420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:16.477479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:16.502633 1055021 cri.go:89] found id: ""
	I1208 02:01:16.502658 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.502667 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:16.502674 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:16.502776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:16.532861 1055021 cri.go:89] found id: ""
	I1208 02:01:16.532886 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.532895 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:16.532904 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:16.532943 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.561207 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:16.561235 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:16.629585 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:16.629623 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:16.647847 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:16.647876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:16.713384 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:16.713404 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:16.713417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.242742 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:19.253432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:19.253496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:19.282053 1055021 cri.go:89] found id: ""
	I1208 02:01:19.282075 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.282091 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:19.282097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:19.282154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:19.317196 1055021 cri.go:89] found id: ""
	I1208 02:01:19.317218 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.317226 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:19.317232 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:19.317291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:19.344133 1055021 cri.go:89] found id: ""
	I1208 02:01:19.344155 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.344164 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:19.344170 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:19.344231 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:19.369544 1055021 cri.go:89] found id: ""
	I1208 02:01:19.369567 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.369576 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:19.369582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:19.369641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:19.394138 1055021 cri.go:89] found id: ""
	I1208 02:01:19.394161 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.394170 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:19.394176 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:19.394234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:19.421882 1055021 cri.go:89] found id: ""
	I1208 02:01:19.421906 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.421915 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:19.421921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:19.421991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:19.447254 1055021 cri.go:89] found id: ""
	I1208 02:01:19.447280 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.447289 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:19.447295 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:19.447359 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:19.471872 1055021 cri.go:89] found id: ""
	I1208 02:01:19.471898 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.471907 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:19.471916 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:19.471929 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:19.537545 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:19.537583 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:19.556105 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:19.556134 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:19.617255 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:19.617275 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:19.617288 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.645378 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:19.645413 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.176988 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:22.187407 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:22.187482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:22.216526 1055021 cri.go:89] found id: ""
	I1208 02:01:22.216551 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.216560 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:22.216567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:22.216629 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:22.241409 1055021 cri.go:89] found id: ""
	I1208 02:01:22.241437 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.241446 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:22.241452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:22.241510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:22.275844 1055021 cri.go:89] found id: ""
	I1208 02:01:22.275873 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.275882 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:22.275888 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:22.275951 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:22.304532 1055021 cri.go:89] found id: ""
	I1208 02:01:22.304560 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.304575 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:22.304582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:22.304640 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:22.347626 1055021 cri.go:89] found id: ""
	I1208 02:01:22.347653 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.347663 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:22.347669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:22.347730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:22.374178 1055021 cri.go:89] found id: ""
	I1208 02:01:22.374205 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.374215 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:22.374221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:22.374280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:22.404202 1055021 cri.go:89] found id: ""
	I1208 02:01:22.404229 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.404238 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:22.404244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:22.404311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:22.429827 1055021 cri.go:89] found id: ""
	I1208 02:01:22.429852 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.429861 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:22.429869 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:22.429880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.461216 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:22.461241 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:22.529595 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:22.529634 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:22.547808 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:22.547841 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:22.614795 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:22.614824 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:22.614836 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
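	Each gathering pass also retries "kubectl describe nodes" against the kubeconfig's endpoint, and every attempt fails the same way: dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port at any point in the window shown. A minimal way to confirm that symptom independently of kubectl is a plain TCP dial; this is a hedged sketch, with localhost:8443 taken from the errors above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// localhost:8443 is the endpoint from the kubeconfig used in the log above;
		// adjust if the cluster was configured with a different apiserver port.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Same symptom kubectl reports: connect: connection refused.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}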
	I1208 02:01:25.143485 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:25.154329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:25.154413 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:25.180079 1055021 cri.go:89] found id: ""
	I1208 02:01:25.180105 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.180114 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:25.180121 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:25.180180 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:25.204723 1055021 cri.go:89] found id: ""
	I1208 02:01:25.204753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.204761 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:25.204768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:25.204825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:25.229571 1055021 cri.go:89] found id: ""
	I1208 02:01:25.229596 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.229604 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:25.229611 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:25.229669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:25.256859 1055021 cri.go:89] found id: ""
	I1208 02:01:25.256888 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.256896 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:25.256903 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:25.256966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:25.286130 1055021 cri.go:89] found id: ""
	I1208 02:01:25.286159 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.286169 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:25.286175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:25.286240 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:25.316764 1055021 cri.go:89] found id: ""
	I1208 02:01:25.316797 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.316806 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:25.316819 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:25.316888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:25.343685 1055021 cri.go:89] found id: ""
	I1208 02:01:25.343753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.343781 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:25.343795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:25.343874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:25.368793 1055021 cri.go:89] found id: ""
	I1208 02:01:25.368819 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.368828 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:25.368864 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:25.368882 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:25.386567 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:25.386594 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:25.454148 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:25.454180 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:25.454193 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.482372 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:25.482406 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:25.512534 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:25.512561 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.077014 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:28.087810 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:28.087929 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:28.117064 1055021 cri.go:89] found id: ""
	I1208 02:01:28.117090 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.117100 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:28.117107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:28.117166 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:28.142720 1055021 cri.go:89] found id: ""
	I1208 02:01:28.142747 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.142756 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:28.142763 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:28.142820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:28.169323 1055021 cri.go:89] found id: ""
	I1208 02:01:28.169349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.169357 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:28.169364 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:28.169423 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:28.198413 1055021 cri.go:89] found id: ""
	I1208 02:01:28.198441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.198450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:28.198456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:28.198538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:28.222900 1055021 cri.go:89] found id: ""
	I1208 02:01:28.222925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.222935 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:28.222941 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:28.223006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:28.252429 1055021 cri.go:89] found id: ""
	I1208 02:01:28.252453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.252462 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:28.252468 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:28.252528 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:28.285260 1055021 cri.go:89] found id: ""
	I1208 02:01:28.285287 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.285296 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:28.285302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:28.285362 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:28.322093 1055021 cri.go:89] found id: ""
	I1208 02:01:28.322122 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.322131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:28.322140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:28.322151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:28.358086 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:28.358113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.422767 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:28.422811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:28.441151 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:28.441185 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:28.510892 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:28.510919 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:28.510932 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.041345 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:31.056282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:31.056357 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:31.087982 1055021 cri.go:89] found id: ""
	I1208 02:01:31.088007 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.088017 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:31.088023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:31.088086 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:31.113983 1055021 cri.go:89] found id: ""
	I1208 02:01:31.114005 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.114014 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:31.114025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:31.114083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:31.141045 1055021 cri.go:89] found id: ""
	I1208 02:01:31.141069 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.141078 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:31.141085 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:31.141154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:31.167841 1055021 cri.go:89] found id: ""
	I1208 02:01:31.167864 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.167873 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:31.167880 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:31.167937 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:31.193449 1055021 cri.go:89] found id: ""
	I1208 02:01:31.193471 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.193479 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:31.193485 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:31.193542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:31.220825 1055021 cri.go:89] found id: ""
	I1208 02:01:31.220850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.220859 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:31.220865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:31.220926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:31.246036 1055021 cri.go:89] found id: ""
	I1208 02:01:31.246063 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.246071 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:31.246077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:31.246140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:31.282360 1055021 cri.go:89] found id: ""
	I1208 02:01:31.282388 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.282396 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:31.282405 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:31.282416 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:31.351320 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:31.351368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:31.370774 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:31.370887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:31.434743 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:31.434763 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:31.434775 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.462946 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:31.462982 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:33.992261 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:34.004797 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:34.004891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:34.044483 1055021 cri.go:89] found id: ""
	I1208 02:01:34.044506 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.044516 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:34.044523 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:34.044598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:34.072528 1055021 cri.go:89] found id: ""
	I1208 02:01:34.072564 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.072573 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:34.072580 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:34.072654 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:34.102278 1055021 cri.go:89] found id: ""
	I1208 02:01:34.102357 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.102379 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:34.102399 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:34.102487 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:34.129526 1055021 cri.go:89] found id: ""
	I1208 02:01:34.129601 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.129634 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:34.129656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:34.129776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:34.155663 1055021 cri.go:89] found id: ""
	I1208 02:01:34.155689 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.155698 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:34.155704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:34.155777 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:34.186951 1055021 cri.go:89] found id: ""
	I1208 02:01:34.186978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.186988 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:34.186996 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:34.187104 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:34.212379 1055021 cri.go:89] found id: ""
	I1208 02:01:34.212404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.212423 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:34.212430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:34.212489 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:34.238401 1055021 cri.go:89] found id: ""
	I1208 02:01:34.238438 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.238447 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:34.238456 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:34.238468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:34.278895 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:34.278970 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:34.356262 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:34.356303 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:34.376513 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:34.376545 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:34.447804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:34.447829 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:34.447843 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:36.976756 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:36.987574 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:36.987651 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:37.035351 1055021 cri.go:89] found id: ""
	I1208 02:01:37.035376 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.035386 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:37.035393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:37.035457 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:37.065004 1055021 cri.go:89] found id: ""
	I1208 02:01:37.065026 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.065034 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:37.065041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:37.065099 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:37.092804 1055021 cri.go:89] found id: ""
	I1208 02:01:37.092828 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.092837 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:37.092843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:37.092901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:37.117820 1055021 cri.go:89] found id: ""
	I1208 02:01:37.117849 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.117857 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:37.117865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:37.117924 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:37.143955 1055021 cri.go:89] found id: ""
	I1208 02:01:37.143978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.143987 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:37.143993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:37.144055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:37.173740 1055021 cri.go:89] found id: ""
	I1208 02:01:37.173764 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.173772 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:37.173779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:37.173838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:37.202687 1055021 cri.go:89] found id: ""
	I1208 02:01:37.202710 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.202719 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:37.202725 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:37.202786 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:37.229307 1055021 cri.go:89] found id: ""
	I1208 02:01:37.229331 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.229339 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:37.229347 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:37.229360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:37.247500 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:37.247530 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:37.329229 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:37.329252 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:37.329267 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:37.358197 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:37.358238 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:37.387860 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:37.387889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:39.956266 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:39.966752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:39.966823 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:39.991660 1055021 cri.go:89] found id: ""
	I1208 02:01:39.991686 1055021 logs.go:282] 0 containers: []
	W1208 02:01:39.991695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:39.991701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:39.991763 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:40.027823 1055021 cri.go:89] found id: ""
	I1208 02:01:40.027905 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.027928 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:40.027949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:40.028063 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:40.064388 1055021 cri.go:89] found id: ""
	I1208 02:01:40.064464 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.064487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:40.064508 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:40.064594 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:40.094787 1055021 cri.go:89] found id: ""
	I1208 02:01:40.094814 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.094832 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:40.094858 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:40.094922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:40.120620 1055021 cri.go:89] found id: ""
	I1208 02:01:40.120645 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.120654 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:40.120660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:40.120720 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:40.153070 1055021 cri.go:89] found id: ""
	I1208 02:01:40.153097 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.153106 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:40.153112 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:40.153183 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:40.181896 1055021 cri.go:89] found id: ""
	I1208 02:01:40.181925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.181935 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:40.181942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:40.182004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:40.209414 1055021 cri.go:89] found id: ""
	I1208 02:01:40.209441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.209450 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:40.209459 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:40.209470 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:40.274756 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:40.274858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:40.294225 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:40.294364 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:40.365754 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:40.365778 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:40.365791 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:40.394699 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:40.394732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:42.924136 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:42.934800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:42.934894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:42.961825 1055021 cri.go:89] found id: ""
	I1208 02:01:42.961850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.961859 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:42.961867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:42.961927 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:42.988379 1055021 cri.go:89] found id: ""
	I1208 02:01:42.988403 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.988412 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:42.988418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:42.988503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:43.023024 1055021 cri.go:89] found id: ""
	I1208 02:01:43.023047 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.023056 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:43.023063 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:43.023139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:43.057964 1055021 cri.go:89] found id: ""
	I1208 02:01:43.057993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.058001 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:43.058008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:43.058073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:43.088198 1055021 cri.go:89] found id: ""
	I1208 02:01:43.088221 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.088229 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:43.088235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:43.088295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:43.116924 1055021 cri.go:89] found id: ""
	I1208 02:01:43.116950 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.116959 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:43.116965 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:43.117042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:43.143043 1055021 cri.go:89] found id: ""
	I1208 02:01:43.143156 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.143172 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:43.143180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:43.143274 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:43.172524 1055021 cri.go:89] found id: ""
	I1208 02:01:43.172547 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.172556 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:43.172565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:43.172577 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:43.237127 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:43.237162 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:43.256485 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:43.256516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:43.325704 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:43.325725 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:43.325737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:43.354439 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:43.354477 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:45.885598 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:45.896346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:45.896416 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:45.921473 1055021 cri.go:89] found id: ""
	I1208 02:01:45.921499 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.921508 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:45.921515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:45.921576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:45.945701 1055021 cri.go:89] found id: ""
	I1208 02:01:45.945725 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.945734 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:45.945740 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:45.945800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:45.973191 1055021 cri.go:89] found id: ""
	I1208 02:01:45.973213 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.973222 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:45.973228 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:45.973289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:45.999665 1055021 cri.go:89] found id: ""
	I1208 02:01:45.999741 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.999764 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:45.999782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:45.999872 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:46.041104 1055021 cri.go:89] found id: ""
	I1208 02:01:46.041176 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.041202 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:46.041224 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:46.041300 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:46.076259 1055021 cri.go:89] found id: ""
	I1208 02:01:46.076332 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.076355 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:46.076373 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:46.076450 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:46.108098 1055021 cri.go:89] found id: ""
	I1208 02:01:46.108163 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.108179 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:46.108186 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:46.108247 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:46.134928 1055021 cri.go:89] found id: ""
	I1208 02:01:46.134964 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.134974 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:46.134983 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:46.134995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:46.164421 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:46.164498 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:46.233311 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:46.233358 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:46.253422 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:46.253502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:46.336577 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:46.336600 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:46.336614 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:48.865787 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:48.876567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:48.876642 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:48.901147 1055021 cri.go:89] found id: ""
	I1208 02:01:48.901177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.901185 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:48.901192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:48.901250 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:48.927326 1055021 cri.go:89] found id: ""
	I1208 02:01:48.927351 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.927360 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:48.927366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:48.927424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:48.951970 1055021 cri.go:89] found id: ""
	I1208 02:01:48.951994 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.952003 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:48.952009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:48.952073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:48.976700 1055021 cri.go:89] found id: ""
	I1208 02:01:48.976724 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.976732 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:48.976739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:48.976796 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:49.005321 1055021 cri.go:89] found id: ""
	I1208 02:01:49.005349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.005359 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:49.005366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:49.005432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:49.045336 1055021 cri.go:89] found id: ""
	I1208 02:01:49.045359 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.045368 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:49.045397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:49.045478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:49.074970 1055021 cri.go:89] found id: ""
	I1208 02:01:49.074997 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.075006 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:49.075012 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:49.075070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:49.100757 1055021 cri.go:89] found id: ""
	I1208 02:01:49.100780 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.100788 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:49.100796 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:49.100808 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:49.165827 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:49.165862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:49.183539 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:49.183618 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:49.249850 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:49.249874 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:49.249887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:49.280238 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:49.280270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:51.819515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:51.830251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:51.830329 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:51.856077 1055021 cri.go:89] found id: ""
	I1208 02:01:51.856098 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.856107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:51.856113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:51.856170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:51.882057 1055021 cri.go:89] found id: ""
	I1208 02:01:51.882086 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.882096 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:51.882103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:51.882170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:51.908531 1055021 cri.go:89] found id: ""
	I1208 02:01:51.908572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.908582 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:51.908588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:51.908649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:51.933571 1055021 cri.go:89] found id: ""
	I1208 02:01:51.933594 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.933603 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:51.933610 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:51.933671 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:51.959716 1055021 cri.go:89] found id: ""
	I1208 02:01:51.959777 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.959800 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:51.959825 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:51.959903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:51.985320 1055021 cri.go:89] found id: ""
	I1208 02:01:51.985384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.985409 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:51.985427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:51.985507 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:52.029640 1055021 cri.go:89] found id: ""
	I1208 02:01:52.029709 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.029736 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:52.029756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:52.029835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:52.060725 1055021 cri.go:89] found id: ""
	I1208 02:01:52.060803 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.060826 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:52.060848 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:52.060874 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:52.129431 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:52.129468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:52.148064 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:52.148095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:52.220103 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:52.220125 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:52.220137 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:52.248853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:52.248892 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:54.781319 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:54.791942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:54.792009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:54.816799 1055021 cri.go:89] found id: ""
	I1208 02:01:54.816821 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.816830 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:54.816835 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:54.816893 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:54.846002 1055021 cri.go:89] found id: ""
	I1208 02:01:54.846028 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.846036 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:54.846043 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:54.846101 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:54.870704 1055021 cri.go:89] found id: ""
	I1208 02:01:54.870729 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.870737 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:54.870744 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:54.870807 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:54.897236 1055021 cri.go:89] found id: ""
	I1208 02:01:54.897302 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.897327 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:54.897347 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:54.897432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:54.921729 1055021 cri.go:89] found id: ""
	I1208 02:01:54.921754 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.921763 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:54.921769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:54.921830 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:54.949586 1055021 cri.go:89] found id: ""
	I1208 02:01:54.949610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.949619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:54.949626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:54.949687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:54.976595 1055021 cri.go:89] found id: ""
	I1208 02:01:54.976618 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.976627 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:54.976633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:54.976708 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:55.012149 1055021 cri.go:89] found id: ""
	I1208 02:01:55.012179 1055021 logs.go:282] 0 containers: []
	W1208 02:01:55.012188 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:55.012198 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:55.012211 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:55.089182 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:55.089225 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:55.107781 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:55.107811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:55.175880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:55.175942 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:55.175962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:55.205060 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:55.205095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:57.733634 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:57.744236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:57.744308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:57.769149 1055021 cri.go:89] found id: ""
	I1208 02:01:57.769173 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.769182 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:57.769188 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:57.769246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:57.796831 1055021 cri.go:89] found id: ""
	I1208 02:01:57.796860 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.796869 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:57.796876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:57.796932 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:57.821809 1055021 cri.go:89] found id: ""
	I1208 02:01:57.821834 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.821844 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:57.821850 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:57.821917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:57.849385 1055021 cri.go:89] found id: ""
	I1208 02:01:57.849410 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.849418 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:57.849424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:57.849481 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:57.874645 1055021 cri.go:89] found id: ""
	I1208 02:01:57.874669 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.874678 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:57.874684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:57.874742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:57.899500 1055021 cri.go:89] found id: ""
	I1208 02:01:57.899572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.899608 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:57.899623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:57.899695 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:57.926677 1055021 cri.go:89] found id: ""
	I1208 02:01:57.926711 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.926720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:57.926727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:57.926833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:57.952159 1055021 cri.go:89] found id: ""
	I1208 02:01:57.952233 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.952249 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:57.952259 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:57.952271 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:58.017945 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:58.018082 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:58.036702 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:58.036877 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:58.109217 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:58.109239 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:58.109252 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:58.137424 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:58.137460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:00.669211 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:00.679729 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:00.679803 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:00.704116 1055021 cri.go:89] found id: ""
	I1208 02:02:00.704140 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.704149 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:00.704156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:00.704220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:00.728883 1055021 cri.go:89] found id: ""
	I1208 02:02:00.728908 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.728917 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:00.728923 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:00.728984 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:00.757361 1055021 cri.go:89] found id: ""
	I1208 02:02:00.757437 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.757453 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:00.757461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:00.757523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:00.784303 1055021 cri.go:89] found id: ""
	I1208 02:02:00.784332 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.784342 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:00.784349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:00.784420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:00.814794 1055021 cri.go:89] found id: ""
	I1208 02:02:00.814818 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.814827 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:00.814833 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:00.814915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:00.840985 1055021 cri.go:89] found id: ""
	I1208 02:02:00.841052 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.841069 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:00.841077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:00.841140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:00.869242 1055021 cri.go:89] found id: ""
	I1208 02:02:00.869268 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.869277 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:00.869283 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:00.869348 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:00.895515 1055021 cri.go:89] found id: ""
	I1208 02:02:00.895540 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.895549 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:00.895557 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:00.895600 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:00.963574 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:00.963611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:00.981868 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:00.981900 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:01.074452 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:01.074541 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:01.074602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:01.107635 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:01.107672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:03.643395 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:03.654301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:03.654370 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:03.680571 1055021 cri.go:89] found id: ""
	I1208 02:02:03.680609 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.680619 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:03.680626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:03.680696 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:03.709419 1055021 cri.go:89] found id: ""
	I1208 02:02:03.709444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.709453 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:03.709459 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:03.709518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:03.736028 1055021 cri.go:89] found id: ""
	I1208 02:02:03.736064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.736073 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:03.736079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:03.736140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:03.760906 1055021 cri.go:89] found id: ""
	I1208 02:02:03.760983 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.761005 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:03.761019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:03.761095 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:03.789527 1055021 cri.go:89] found id: ""
	I1208 02:02:03.789563 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.789572 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:03.789578 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:03.789655 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:03.817176 1055021 cri.go:89] found id: ""
	I1208 02:02:03.817203 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.817211 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:03.817218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:03.817277 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:03.847025 1055021 cri.go:89] found id: ""
	I1208 02:02:03.847053 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.847063 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:03.847070 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:03.847161 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:03.872945 1055021 cri.go:89] found id: ""
	I1208 02:02:03.872972 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.872981 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:03.872990 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:03.873002 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:03.938890 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:03.938927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:03.956669 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:03.956699 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:04.047856 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:04.047931 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:04.047960 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:04.084291 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:04.084328 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:06.621579 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:06.632180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:06.632262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:06.658187 1055021 cri.go:89] found id: ""
	I1208 02:02:06.658214 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.658223 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:06.658230 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:06.658289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:06.683455 1055021 cri.go:89] found id: ""
	I1208 02:02:06.683479 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.683487 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:06.683494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:06.683555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:06.709121 1055021 cri.go:89] found id: ""
	I1208 02:02:06.709147 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.709156 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:06.709162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:06.709220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:06.735601 1055021 cri.go:89] found id: ""
	I1208 02:02:06.735639 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.735649 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:06.735655 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:06.735717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:06.761793 1055021 cri.go:89] found id: ""
	I1208 02:02:06.761817 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.761826 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:06.761832 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:06.761891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:06.787053 1055021 cri.go:89] found id: ""
	I1208 02:02:06.787075 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.787092 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:06.787099 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:06.787168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:06.815964 1055021 cri.go:89] found id: ""
	I1208 02:02:06.815990 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.815999 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:06.816006 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:06.816067 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:06.841508 1055021 cri.go:89] found id: ""
	I1208 02:02:06.841534 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.841543 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:06.841552 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:06.841564 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:06.906588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:06.906627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:06.925347 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:06.925380 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:07.004820 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:07.004851 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:07.004865 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:07.038308 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:07.038348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.573053 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:09.583792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:09.583864 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:09.611232 1055021 cri.go:89] found id: ""
	I1208 02:02:09.611255 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.611265 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:09.611271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:09.611340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:09.636029 1055021 cri.go:89] found id: ""
	I1208 02:02:09.636054 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.636063 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:09.636069 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:09.636127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:09.662307 1055021 cri.go:89] found id: ""
	I1208 02:02:09.662334 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.662344 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:09.662350 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:09.662430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:09.688279 1055021 cri.go:89] found id: ""
	I1208 02:02:09.688304 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.688314 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:09.688320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:09.688385 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:09.717056 1055021 cri.go:89] found id: ""
	I1208 02:02:09.717081 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.717090 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:09.717097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:09.717206 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:09.745719 1055021 cri.go:89] found id: ""
	I1208 02:02:09.745744 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.745753 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:09.745760 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:09.745820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:09.774995 1055021 cri.go:89] found id: ""
	I1208 02:02:09.775020 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.775029 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:09.775035 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:09.775107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:09.800142 1055021 cri.go:89] found id: ""
	I1208 02:02:09.800165 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.800174 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:09.800183 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:09.800196 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:09.817474 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:09.817504 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:09.881166 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:09.881188 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:09.881201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:09.909282 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:09.909316 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.936890 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:09.936917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:12.504767 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:12.517010 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:12.517087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:12.552375 1055021 cri.go:89] found id: ""
	I1208 02:02:12.552405 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.552414 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:12.552421 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:12.552484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:12.581970 1055021 cri.go:89] found id: ""
	I1208 02:02:12.581993 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.582002 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:12.582008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:12.582070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:12.609191 1055021 cri.go:89] found id: ""
	I1208 02:02:12.609215 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.609223 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:12.609229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:12.609289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:12.634872 1055021 cri.go:89] found id: ""
	I1208 02:02:12.634900 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.634909 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:12.634917 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:12.634977 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:12.660600 1055021 cri.go:89] found id: ""
	I1208 02:02:12.660622 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.660631 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:12.660637 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:12.660698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:12.686371 1055021 cri.go:89] found id: ""
	I1208 02:02:12.686394 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.686402 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:12.686409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:12.686468 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:12.711549 1055021 cri.go:89] found id: ""
	I1208 02:02:12.711574 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.711583 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:12.711589 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:12.711650 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:12.736572 1055021 cri.go:89] found id: ""
	I1208 02:02:12.736599 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.736609 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:12.736619 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:12.736631 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:12.754919 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:12.754947 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:12.825472 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:12.825494 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:12.825508 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:12.854189 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:12.854226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:12.881205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:12.881233 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:15.446588 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:15.457588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:15.457660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:15.482738 1055021 cri.go:89] found id: ""
	I1208 02:02:15.482763 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.482772 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:15.482779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:15.482877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:15.511332 1055021 cri.go:89] found id: ""
	I1208 02:02:15.511364 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.511373 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:15.511380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:15.511446 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:15.555502 1055021 cri.go:89] found id: ""
	I1208 02:02:15.555528 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.555537 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:15.555543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:15.555604 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:15.584568 1055021 cri.go:89] found id: ""
	I1208 02:02:15.584590 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.584598 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:15.584604 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:15.584662 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:15.613196 1055021 cri.go:89] found id: ""
	I1208 02:02:15.613219 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.613228 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:15.613234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:15.613299 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:15.642375 1055021 cri.go:89] found id: ""
	I1208 02:02:15.642396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.642404 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:15.642411 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:15.642469 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:15.666701 1055021 cri.go:89] found id: ""
	I1208 02:02:15.666724 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.666733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:15.666739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:15.666804 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:15.694203 1055021 cri.go:89] found id: ""
	I1208 02:02:15.694226 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.694235 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:15.694244 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:15.694256 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:15.711985 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:15.712018 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:15.783845 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:15.783867 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:15.783880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:15.812138 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:15.812172 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:15.841785 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:15.841815 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.407879 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:18.418616 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:18.418687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:18.452125 1055021 cri.go:89] found id: ""
	I1208 02:02:18.452149 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.452158 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:18.452165 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:18.452226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:18.484590 1055021 cri.go:89] found id: ""
	I1208 02:02:18.484618 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.484627 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:18.484633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:18.484693 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:18.521073 1055021 cri.go:89] found id: ""
	I1208 02:02:18.521101 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.521111 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:18.521117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:18.521195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:18.552106 1055021 cri.go:89] found id: ""
	I1208 02:02:18.552131 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.552142 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:18.552149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:18.552234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:18.583000 1055021 cri.go:89] found id: ""
	I1208 02:02:18.583026 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.583034 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:18.583041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:18.583108 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:18.608873 1055021 cri.go:89] found id: ""
	I1208 02:02:18.608901 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.608909 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:18.608916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:18.608975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:18.638459 1055021 cri.go:89] found id: ""
	I1208 02:02:18.638482 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.638491 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:18.638497 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:18.638554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:18.664652 1055021 cri.go:89] found id: ""
	I1208 02:02:18.664678 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.664687 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:18.664696 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:18.664708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:18.727887 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:18.727909 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:18.727922 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:18.756733 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:18.756768 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:18.784791 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:18.784819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.854704 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:18.854747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.373144 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:21.384002 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:21.384076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:21.408827 1055021 cri.go:89] found id: ""
	I1208 02:02:21.408851 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.408860 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:21.408866 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:21.408926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:21.437335 1055021 cri.go:89] found id: ""
	I1208 02:02:21.437366 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.437375 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:21.437380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:21.437440 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:21.461726 1055021 cri.go:89] found id: ""
	I1208 02:02:21.461753 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.461762 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:21.461768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:21.461827 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:21.486068 1055021 cri.go:89] found id: ""
	I1208 02:02:21.486095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.486104 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:21.486110 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:21.486168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:21.521646 1055021 cri.go:89] found id: ""
	I1208 02:02:21.521671 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.521679 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:21.521686 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:21.521754 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:21.549687 1055021 cri.go:89] found id: ""
	I1208 02:02:21.549714 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.549723 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:21.549730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:21.549789 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:21.584524 1055021 cri.go:89] found id: ""
	I1208 02:02:21.584600 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.584615 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:21.584623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:21.584686 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:21.613834 1055021 cri.go:89] found id: ""
	I1208 02:02:21.613859 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.613868 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:21.613877 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:21.613888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:21.679269 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:21.679305 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.696894 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:21.696924 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:21.763490 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:21.763525 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:21.763538 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:21.791788 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:21.791819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.320943 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:24.332441 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:24.332511 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:24.359381 1055021 cri.go:89] found id: ""
	I1208 02:02:24.359403 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.359412 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:24.359418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:24.359484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:24.385766 1055021 cri.go:89] found id: ""
	I1208 02:02:24.385789 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.385798 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:24.385804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:24.385870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:24.412597 1055021 cri.go:89] found id: ""
	I1208 02:02:24.412619 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.412633 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:24.412640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:24.412700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:24.438239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.438262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.438270 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:24.438277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:24.438336 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:24.465529 1055021 cri.go:89] found id: ""
	I1208 02:02:24.465551 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.465560 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:24.465566 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:24.465628 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:24.490130 1055021 cri.go:89] found id: ""
	I1208 02:02:24.490153 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.490162 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:24.490168 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:24.490228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:24.531239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.531262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.531271 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:24.531277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:24.531335 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:24.570624 1055021 cri.go:89] found id: ""
	I1208 02:02:24.570646 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.570654 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:24.570663 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:24.570676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:24.588822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:24.588852 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:24.650804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:24.650826 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:24.650858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:24.680022 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:24.680060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.708316 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:24.708352 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.274217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:27.287664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:27.287788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:27.318113 1055021 cri.go:89] found id: ""
	I1208 02:02:27.318193 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.318215 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:27.318234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:27.318332 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:27.344915 1055021 cri.go:89] found id: ""
	I1208 02:02:27.344943 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.344951 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:27.344958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:27.345024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:27.374469 1055021 cri.go:89] found id: ""
	I1208 02:02:27.374502 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.374512 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:27.374519 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:27.374588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:27.399626 1055021 cri.go:89] found id: ""
	I1208 02:02:27.399665 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.399674 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:27.399680 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:27.399753 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:27.429184 1055021 cri.go:89] found id: ""
	I1208 02:02:27.429222 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.429230 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:27.429236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:27.429303 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:27.453872 1055021 cri.go:89] found id: ""
	I1208 02:02:27.453910 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.453919 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:27.453926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:27.453996 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:27.479093 1055021 cri.go:89] found id: ""
	I1208 02:02:27.479117 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.479127 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:27.479134 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:27.479195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:27.513793 1055021 cri.go:89] found id: ""
	I1208 02:02:27.513820 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.513840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:27.513849 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:27.513862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:27.543879 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:27.543958 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:27.585714 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:27.585783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.651465 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:27.651502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:27.669169 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:27.669201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:27.732840 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.233103 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:30.244434 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:30.244504 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:30.286359 1055021 cri.go:89] found id: ""
	I1208 02:02:30.286381 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.286390 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:30.286396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:30.286455 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:30.317925 1055021 cri.go:89] found id: ""
	I1208 02:02:30.317947 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.317955 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:30.317960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:30.318020 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:30.352522 1055021 cri.go:89] found id: ""
	I1208 02:02:30.352543 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.352551 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:30.352557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:30.352619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:30.376895 1055021 cri.go:89] found id: ""
	I1208 02:02:30.376917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.376925 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:30.376932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:30.376989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:30.401457 1055021 cri.go:89] found id: ""
	I1208 02:02:30.401478 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.401487 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:30.401493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:30.401551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:30.428269 1055021 cri.go:89] found id: ""
	I1208 02:02:30.428291 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.428300 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:30.428306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:30.428366 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:30.452846 1055021 cri.go:89] found id: ""
	I1208 02:02:30.452869 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.452878 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:30.452884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:30.452946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:30.477617 1055021 cri.go:89] found id: ""
	I1208 02:02:30.477645 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.477655 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:30.477665 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:30.477676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:30.507758 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:30.507782 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:30.577724 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:30.577802 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:30.598108 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:30.598136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:30.663869 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.663892 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:30.663905 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.192012 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:33.202802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:33.202903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:33.229607 1055021 cri.go:89] found id: ""
	I1208 02:02:33.229629 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.229638 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:33.229645 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:33.229704 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:33.257802 1055021 cri.go:89] found id: ""
	I1208 02:02:33.257837 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.257847 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:33.257854 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:33.257913 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:33.289073 1055021 cri.go:89] found id: ""
	I1208 02:02:33.289095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.289103 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:33.289113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:33.289171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:33.317039 1055021 cri.go:89] found id: ""
	I1208 02:02:33.317060 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.317069 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:33.317075 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:33.317137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:33.342479 1055021 cri.go:89] found id: ""
	I1208 02:02:33.342500 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.342509 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:33.342515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:33.342577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:33.367849 1055021 cri.go:89] found id: ""
	I1208 02:02:33.367877 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.367886 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:33.367892 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:33.367950 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:33.393711 1055021 cri.go:89] found id: ""
	I1208 02:02:33.393739 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.393748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:33.393755 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:33.393818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:33.419264 1055021 cri.go:89] found id: ""
	I1208 02:02:33.419286 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.419295 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:33.419303 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:33.419320 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.446586 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:33.446620 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:33.474605 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:33.474633 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:33.546521 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:33.546562 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:33.567522 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:33.567553 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:33.633164 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.133387 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:36.145051 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:36.145130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:36.178396 1055021 cri.go:89] found id: ""
	I1208 02:02:36.178426 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.178434 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:36.178442 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:36.178500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:36.204662 1055021 cri.go:89] found id: ""
	I1208 02:02:36.204685 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.204694 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:36.204700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:36.204758 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:36.233744 1055021 cri.go:89] found id: ""
	I1208 02:02:36.233766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.233776 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:36.233782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:36.233844 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:36.271413 1055021 cri.go:89] found id: ""
	I1208 02:02:36.271436 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.271445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:36.271453 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:36.271518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:36.299867 1055021 cri.go:89] found id: ""
	I1208 02:02:36.299889 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.299898 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:36.299905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:36.299967 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:36.333748 1055021 cri.go:89] found id: ""
	I1208 02:02:36.333771 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.333779 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:36.333786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:36.333877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:36.359920 1055021 cri.go:89] found id: ""
	I1208 02:02:36.359944 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.359953 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:36.359959 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:36.360016 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:36.384561 1055021 cri.go:89] found id: ""
	I1208 02:02:36.384583 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.384592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:36.384600 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:36.384611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:36.449118 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:36.449153 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:36.469510 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:36.469537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:36.544911 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.544934 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:36.544972 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:36.577604 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:36.577640 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.106569 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:39.117314 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:39.117406 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:39.147330 1055021 cri.go:89] found id: ""
	I1208 02:02:39.147354 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.147362 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:39.147369 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:39.147429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:39.175702 1055021 cri.go:89] found id: ""
	I1208 02:02:39.175725 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.175733 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:39.175739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:39.175797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:39.209892 1055021 cri.go:89] found id: ""
	I1208 02:02:39.209917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.209926 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:39.209932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:39.209990 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:39.235210 1055021 cri.go:89] found id: ""
	I1208 02:02:39.235239 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.235248 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:39.235255 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:39.235312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:39.268421 1055021 cri.go:89] found id: ""
	I1208 02:02:39.268444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.268453 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:39.268460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:39.268520 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:39.308045 1055021 cri.go:89] found id: ""
	I1208 02:02:39.308070 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.308079 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:39.308086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:39.308152 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:39.338659 1055021 cri.go:89] found id: ""
	I1208 02:02:39.338684 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.338693 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:39.338699 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:39.338759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:39.369373 1055021 cri.go:89] found id: ""
	I1208 02:02:39.369396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.369405 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:39.369414 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:39.369426 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.401929 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:39.401959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:39.466665 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:39.466705 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:39.484758 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:39.484786 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:39.570718 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:39.570737 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:39.570750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.101949 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:42.135199 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:42.135361 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:42.190279 1055021 cri.go:89] found id: ""
	I1208 02:02:42.190367 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.190393 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:42.190415 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:42.190545 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:42.222777 1055021 cri.go:89] found id: ""
	I1208 02:02:42.222883 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.222911 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:42.222934 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:42.223043 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:42.257086 1055021 cri.go:89] found id: ""
	I1208 02:02:42.257169 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.257193 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:42.257217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:42.257340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:42.290338 1055021 cri.go:89] found id: ""
	I1208 02:02:42.290421 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.290445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:42.290464 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:42.290571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:42.321497 1055021 cri.go:89] found id: ""
	I1208 02:02:42.321567 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.321592 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:42.321612 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:42.321710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:42.351037 1055021 cri.go:89] found id: ""
	I1208 02:02:42.351157 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.351184 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:42.351205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:42.351308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:42.377225 1055021 cri.go:89] found id: ""
	I1208 02:02:42.377251 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.377259 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:42.377266 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:42.377324 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:42.403038 1055021 cri.go:89] found id: ""
	I1208 02:02:42.403064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.403073 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:42.403117 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:42.403130 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:42.468670 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:42.468709 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:42.486822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:42.486906 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:42.576804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:42.576828 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:42.576844 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.609307 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:42.609345 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:45.139048 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:45.153298 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:45.153393 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:45.190816 1055021 cri.go:89] found id: ""
	I1208 02:02:45.190864 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.190874 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:45.190882 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:45.190954 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:45.248053 1055021 cri.go:89] found id: ""
	I1208 02:02:45.248087 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.248097 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:45.248105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:45.248178 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:45.291403 1055021 cri.go:89] found id: ""
	I1208 02:02:45.291441 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.291506 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:45.291539 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:45.291685 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:45.327809 1055021 cri.go:89] found id: ""
	I1208 02:02:45.327885 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.327907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:45.327925 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:45.328011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:45.356269 1055021 cri.go:89] found id: ""
	I1208 02:02:45.356293 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.356302 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:45.356308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:45.356386 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:45.385189 1055021 cri.go:89] found id: ""
	I1208 02:02:45.385213 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.385222 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:45.385229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:45.385309 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:45.413524 1055021 cri.go:89] found id: ""
	I1208 02:02:45.413549 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.413558 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:45.413565 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:45.413652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:45.443469 1055021 cri.go:89] found id: ""
	I1208 02:02:45.443547 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.443563 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:45.443572 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:45.443584 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:45.515350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:45.515441 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:45.534931 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:45.534961 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:45.612239 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:45.612262 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:45.612274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:45.640465 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:45.640503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.170309 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:48.181762 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:48.181835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:48.209264 1055021 cri.go:89] found id: ""
	I1208 02:02:48.209288 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.209297 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:48.209303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:48.209364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:48.236743 1055021 cri.go:89] found id: ""
	I1208 02:02:48.236766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.236775 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:48.236782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:48.236847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:48.275731 1055021 cri.go:89] found id: ""
	I1208 02:02:48.275757 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.275765 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:48.275772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:48.275837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:48.311639 1055021 cri.go:89] found id: ""
	I1208 02:02:48.311667 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.311676 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:48.311682 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:48.311744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:48.342675 1055021 cri.go:89] found id: ""
	I1208 02:02:48.342711 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.342720 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:48.342726 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:48.342808 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:48.369485 1055021 cri.go:89] found id: ""
	I1208 02:02:48.369519 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.369528 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:48.369535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:48.369608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:48.396744 1055021 cri.go:89] found id: ""
	I1208 02:02:48.396769 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.396778 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:48.396785 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:48.396847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:48.422870 1055021 cri.go:89] found id: ""
	I1208 02:02:48.422894 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.422904 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:48.422913 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:48.422927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.454409 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:48.454482 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:48.522366 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:48.522456 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:48.541233 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:48.541391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:48.617160 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:48.617226 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:48.617247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:51.146382 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:51.160619 1055021 out.go:203] 
	W1208 02:02:51.163425 1055021 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 02:02:51.163473 1055021 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 02:02:51.163484 1055021 out.go:285] * Related issues:
	* Related issues:
	W1208 02:02:51.163498 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1208 02:02:51.163517 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1208 02:02:51.166282 1055021 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 105
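The wait loop in the captured output never finds a kube-apiserver process or container before the 6m0s timeout, which is what produces exit status 105 (K8S_APISERVER_MISSING). A minimal manual re-check of the same probes, assuming the newest-cni-448023 profile from this run is still present on the host and the same out/minikube-linux-arm64 binary is used:

	# Re-run the checks the wait loop performs inside the node (manual reproduction sketch)
	out/minikube-linux-arm64 ssh -p newest-cni-448023 -- sudo pgrep -af kube-apiserver
	out/minikube-linux-arm64 ssh -p newest-cni-448023 -- sudo crictl ps -a --name kube-apiserver
	# If nothing is listed, the kubelet journal usually shows why the apiserver static pod never started
	out/minikube-linux-arm64 ssh -p newest-cni-448023 -- sudo journalctl -u kubelet -n 100 --no-pager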
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-448023
helpers_test.go:243: (dbg) docker inspect newest-cni-448023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	        "Created": "2025-12-08T01:46:34.353152924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1055155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:56:41.277432033Z",
	            "FinishedAt": "2025-12-08T01:56:39.892982826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hosts",
	        "LogPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9-json.log",
	        "Name": "/newest-cni-448023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-448023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-448023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	                "LowerDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-448023",
	                "Source": "/var/lib/docker/volumes/newest-cni-448023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-448023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-448023",
	                "name.minikube.sigs.k8s.io": "newest-cni-448023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "813118b42480babba062786ba0ba8ff3e7452eec7c2d8f800688d8fd68359617",
	            "SandboxKey": "/var/run/docker/netns/813118b42480",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-448023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:9d:8d:8a:21:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec5af7f0fdbc70a95f83d97d8a04145286c7acd7e864f0f850cd22983b469ab7",
	                    "EndpointID": "577f657908aa7f309cdfc5d98526f00d0b1c5b25cb769be3035b9f923a1c6bf3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-448023",
	                        "ff1a1ad3010f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
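The post-mortem captures the full docker inspect dump verbatim; for a quick look at just the fields relevant to this failure (container state and the host port published for the apiserver's 8443/tcp), a Go-template filter is usually enough. A sketch, assuming the newest-cni-448023 container still exists on the host:

	# Container state only
	docker inspect -f '{{.State.Status}}' newest-cni-448023
	# Host port mapped to 8443/tcp (the apiserver port this test waits on)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-448023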
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (367.864088ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25: (1.595762369s)
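The 25-line tail collected below is what the helper gathers by default; when a fuller trace is needed, minikube logs can write the complete output to a file. A sketch, assuming the same binary and profile as above:

	out/minikube-linux-arm64 -p newest-cni-448023 logs --file=newest-cni-448023-full.log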
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ embed-certs-172173 image list --format=json                                                                                                                                                                                                          │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:54 UTC │                     │
	│ stop    │ -p newest-cni-448023 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p newest-cni-448023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
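
	The audit table above is minikube's own record of the CLI invocations made against these profiles during the run. As a purely illustrative sketch (the audit.json location under MINIKUBE_HOME and the field names inside each entry are assumptions on my part, not shown in this report), the same columns could be pulled back out of the audit log with jq:

	    # hypothetical: adjust the path and the field names to the actual audit.json layout
	    MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	    jq -r '.data | [.command, .profile, .user, .startTime, .endTime] | @tsv' \
	        "$MINIKUBE_HOME/logs/audit.json" | tail -n 25
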
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:56:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:56:40.995814 1055021 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:56:40.995993 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996024 1055021 out.go:374] Setting ErrFile to fd 2...
	I1208 01:56:40.996044 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996297 1055021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:56:40.996698 1055021 out.go:368] Setting JSON to false
	I1208 01:56:40.997651 1055021 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23933,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:56:40.997760 1055021 start.go:143] virtualization:  
	I1208 01:56:41.000930 1055021 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:56:41.005767 1055021 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:56:41.005958 1055021 notify.go:221] Checking for updates...
	I1208 01:56:41.009547 1055021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:56:41.012698 1055021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:41.016029 1055021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:56:41.019114 1055021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:56:41.022081 1055021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:56:41.025425 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:41.026092 1055021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:56:41.062956 1055021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:56:41.063137 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.133740 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.124579493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
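
	Both docker checks in this start shell out to `docker system info --format "{{json .}}"` and log the entire JSON blob, which is why the same dump appears twice in this section. When reading it, it is usually enough to filter for the handful of fields the driver validation actually cares about (cgroup driver, CPU/memory, server version); an illustrative one-liner, not part of the test run:

	    # show only the fields relevant to minikube's docker driver checks
	    docker system info --format '{{json .}}' \
	      | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal, OperatingSystem}'
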
	I1208 01:56:41.133841 1055021 docker.go:319] overlay module found
	I1208 01:56:41.136922 1055021 out.go:179] * Using the docker driver based on existing profile
	I1208 01:56:41.139812 1055021 start.go:309] selected driver: docker
	I1208 01:56:41.139836 1055021 start.go:927] validating driver "docker" against &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.139955 1055021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:56:41.140671 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.193763 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.183682659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.194162 1055021 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:56:41.194196 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:41.194260 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:41.194313 1055021 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.197698 1055021 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:56:41.200489 1055021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:56:41.203470 1055021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:56:41.206341 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:41.206393 1055021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:56:41.206406 1055021 cache.go:65] Caching tarball of preloaded images
	I1208 01:56:41.206414 1055021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:56:41.206514 1055021 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:56:41.206524 1055021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
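
	The preload check above only verifies that the tarball is already present in the local cache, so nothing is downloaded at this point. If in doubt, the cache contents can be listed directly (path taken from the log lines above):

	    # list cached preload tarballs for this MINIKUBE_HOME
	    ls -lh /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/
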
	I1208 01:56:41.206659 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.226393 1055021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:56:41.226417 1055021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:56:41.226437 1055021 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:56:41.226470 1055021 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:56:41.226539 1055021 start.go:364] duration metric: took 45.818µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:56:41.226562 1055021 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:56:41.226569 1055021 fix.go:54] fixHost starting: 
	I1208 01:56:41.226872 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.244524 1055021 fix.go:112] recreateIfNeeded on newest-cni-448023: state=Stopped err=<nil>
	W1208 01:56:41.244564 1055021 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:56:42.018560 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:44.518581 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:41.247746 1055021 out.go:252] * Restarting existing docker container for "newest-cni-448023" ...
	I1208 01:56:41.247847 1055021 cli_runner.go:164] Run: docker start newest-cni-448023
	I1208 01:56:41.505835 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.523362 1055021 kic.go:430] container "newest-cni-448023" state is running.
	I1208 01:56:41.523773 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:41.545536 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.545777 1055021 machine.go:94] provisionDockerMachine start ...
	I1208 01:56:41.545848 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:41.570998 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:41.571328 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:41.571336 1055021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:56:41.572041 1055021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:56:44.722629 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.722658 1055021 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:56:44.722733 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.743562 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.743889 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.743906 1055021 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:56:44.912657 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.912755 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.930550 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.930902 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.930926 1055021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:56:45.125086 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:56:45.125166 1055021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:56:45.125215 1055021 ubuntu.go:190] setting up certificates
	I1208 01:56:45.125242 1055021 provision.go:84] configureAuth start
	I1208 01:56:45.125340 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:45.146934 1055021 provision.go:143] copyHostCerts
	I1208 01:56:45.147071 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:56:45.147086 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:56:45.147185 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:56:45.147315 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:56:45.147333 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:56:45.147379 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:56:45.147450 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:56:45.147463 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:56:45.147494 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:56:45.147561 1055021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:56:45.319641 1055021 provision.go:177] copyRemoteCerts
	I1208 01:56:45.319718 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:56:45.319771 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.338151 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.446957 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:56:45.464534 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:56:45.481634 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:56:45.499110 1055021 provision.go:87] duration metric: took 373.83191ms to configureAuth
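
	configureAuth, which just completed, regenerates the machine server certificate with the SAN set shown earlier (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-448023) and copies it to /etc/docker on the node. A quick sanity check of which SANs actually ended up in the generated cert, using the host-side path from the log:

	    # print the Subject Alternative Name extension of the freshly written server cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
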
	I1208 01:56:45.499137 1055021 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:56:45.499354 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:45.499462 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.519312 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:45.520323 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:45.520348 1055021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:56:45.838649 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:56:45.838675 1055021 machine.go:97] duration metric: took 4.292880237s to provisionDockerMachine
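
	The provisioning step that just finished wrote /etc/sysconfig/crio.minikube inside the node so that CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarted the runtime. If that ever needs double-checking, something like the following would do; the assumption that crio.service sources this file (via an EnvironmentFile or drop-in) is mine and not stated in the log:

	    # run inside the node, e.g. after `minikube ssh -p newest-cni-448023`
	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -i -B1 -A1 sysconfig   # confirm the unit references the file
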
	I1208 01:56:45.838688 1055021 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:56:45.838701 1055021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:56:45.838764 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:56:45.838808 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.856107 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.962864 1055021 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:56:45.966280 1055021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:56:45.966310 1055021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:56:45.966321 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:56:45.966376 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:56:45.966455 1055021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:56:45.966565 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:56:45.973812 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:45.990960 1055021 start.go:296] duration metric: took 152.256258ms for postStartSetup
	I1208 01:56:45.991062 1055021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:56:45.991102 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.010295 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.111994 1055021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:56:46.116921 1055021 fix.go:56] duration metric: took 4.890342951s for fixHost
	I1208 01:56:46.116949 1055021 start.go:83] releasing machines lock for "newest-cni-448023", held for 4.89039814s
	I1208 01:56:46.117023 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:46.133998 1055021 ssh_runner.go:195] Run: cat /version.json
	I1208 01:56:46.134053 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.134086 1055021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:56:46.134143 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.155007 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.157578 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.259943 1055021 ssh_runner.go:195] Run: systemctl --version
	I1208 01:56:46.363782 1055021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:56:46.401418 1055021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:56:46.405895 1055021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:56:46.406027 1055021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:56:46.414120 1055021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:56:46.414145 1055021 start.go:496] detecting cgroup driver to use...
	I1208 01:56:46.414178 1055021 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:56:46.414240 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:56:46.430116 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:56:46.443306 1055021 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:56:46.443370 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:56:46.459228 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:56:46.472250 1055021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:56:46.583643 1055021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:56:46.702836 1055021 docker.go:234] disabling docker service ...
	I1208 01:56:46.702974 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:56:46.718081 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:56:46.731165 1055021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:56:46.841278 1055021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:56:46.959396 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:56:46.972986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:56:46.988672 1055021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:56:46.988773 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:46.998541 1055021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:56:46.998635 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.012333 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.022719 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.033036 1055021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:56:47.042410 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.053356 1055021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.066055 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.076106 1055021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:56:47.083610 1055021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:56:47.090937 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.204760 1055021 ssh_runner.go:195] Run: sudo systemctl restart crio
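
	The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10.1, cgroupfs as the cgroup manager, conmon_cgroup set to "pod", and net.ipv4.ip_unprivileged_port_start=0 appended to default_sysctls; the restart just issued makes CRI-O pick the changes up. A grep over the same file would confirm the result (expected values taken from the commands above; the rest of the drop-in's layout is not shown in this log):

	    # verify the settings the sed edits are supposed to leave behind
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
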
	I1208 01:56:47.377268 1055021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:56:47.377383 1055021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:56:47.381048 1055021 start.go:564] Will wait 60s for crictl version
	I1208 01:56:47.381161 1055021 ssh_runner.go:195] Run: which crictl
	I1208 01:56:47.384529 1055021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:56:47.407415 1055021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:56:47.407590 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.438310 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.480028 1055021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:56:47.482931 1055021 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:56:47.498300 1055021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:56:47.502114 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
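
	The bash one-liner above is the upsert idiom minikube uses for /etc/hosts: grep -v strips any stale host.minikube.internal line, the fresh "192.168.85.1  host.minikube.internal" mapping is appended, and the temp file is copied back with sudo cp (a plain shell redirect would not run as root on the target file). Verifying the result afterwards is just:

	    # confirm the host gateway alias is present inside the node
	    grep 'host.minikube.internal' /etc/hosts
	    # expected (tab-separated): 192.168.85.1    host.minikube.internal
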
	I1208 01:56:47.515024 1055021 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:56:47.517850 1055021 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:56:47.518007 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:47.518083 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.554783 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.554810 1055021 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:56:47.554891 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.580370 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.580396 1055021 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:56:47.580404 1055021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:56:47.580497 1055021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
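
	The kubelet unit fragment shown above is presumably what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 374-byte scp further down) alongside /lib/systemd/system/kubelet.service. If the effective flags ever need checking on a live node, systemd can render the merged unit; purely illustrative:

	    # show kubelet.service plus the 10-kubeadm.conf drop-in minikube wrote
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart --no-pager
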
	I1208 01:56:47.580581 1055021 ssh_runner.go:195] Run: crio config
	I1208 01:56:47.630652 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:47.630677 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:47.630697 1055021 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:56:47.630720 1055021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:56:47.630943 1055021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:56:47.631027 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:56:47.638867 1055021 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:56:47.638960 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:56:47.646535 1055021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:56:47.659466 1055021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:56:47.672488 1055021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
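
	The kubeadm config printed earlier is staged here as /var/tmp/minikube/kubeadm.yaml.new (2219 bytes). It can be sanity-checked against the bundled binaries before the cluster is brought back up; this is only a sketch, assuming kubeadm sits next to kubelet in the binaries directory the `ls` above found and that this kubeadm version supports `kubeadm config validate`:

	    # validate the generated config with the same kubeadm binary minikube will use
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new
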
	I1208 01:56:47.685612 1055021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:56:47.689373 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.699289 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.852921 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:47.877101 1055021 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:56:47.877130 1055021 certs.go:195] generating shared ca certs ...
	I1208 01:56:47.877147 1055021 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:47.877305 1055021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:56:47.877358 1055021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:56:47.877370 1055021 certs.go:257] generating profile certs ...
	I1208 01:56:47.877482 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:56:47.877551 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:56:47.877603 1055021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:56:47.877731 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:56:47.877771 1055021 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:56:47.877792 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:56:47.877831 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:56:47.877859 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:56:47.877890 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:56:47.877943 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:47.879217 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:56:47.903514 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:56:47.922072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:56:47.939555 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:56:47.956891 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:56:47.976072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:56:47.994485 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:56:48.016256 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:56:48.036003 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:56:48.058425 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:56:48.078107 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:56:48.096426 1055021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:56:48.110183 1055021 ssh_runner.go:195] Run: openssl version
	I1208 01:56:48.117292 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.125194 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:56:48.133030 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136789 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136880 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.178238 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:56:48.186394 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.194429 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:56:48.203481 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207582 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207655 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.249053 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:56:48.257115 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.265010 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:56:48.272913 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276751 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276818 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.318199 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
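(The symlink names checked above, 51391683.0, 3ec20f2e.0 and b5213941.0, follow OpenSSL's subject-hash convention: each certificate under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and linked into /etc/ssl/certs as <hash>.0 so the system trust store can resolve it. A minimal Go sketch of that step, shelling out to openssl the same way the log does; the helper name and paths are illustrative, not minikube code.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert hashes a CA certificate with openssl and links it into the
// trust directory as <hash>.0, mirroring the "ln -fs" calls in the log.
func installCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}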
	I1208 01:56:48.326277 1055021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:56:48.330322 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:56:48.371576 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:56:48.412414 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:56:48.454546 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:56:48.499800 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:56:48.544265 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
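(`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; the run above uses it to confirm the existing control-plane certs can be reused. An equivalent check written directly against Go's crypto/x509, as a hedged sketch; the file path is taken from the log, the helper itself is illustrative.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least the given duration (-checkend 86400 corresponds to 24h).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}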
	I1208 01:56:48.590374 1055021 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
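(The StartCluster line is the cluster profile struct printed with %+v; the knobs that matter for this run are the docker driver, the crio runtime, Kubernetes v1.35.0-beta.0 and the kubeadm pod-network-cidr extra option. A stripped-down, purely illustrative Go sketch of how such a profile maps onto types; these are not minikube's real config structs, only field names lifted from the log line.)

package main

import "fmt"

// Illustrative subset only; field names follow the log line above.
type ExtraOption struct {
	Component, Key, Value string
}

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
	ExtraOptions      []ExtraOption
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MiB
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := ClusterConfig{
		Name:   "newest-cni-448023",
		Driver: "docker",
		Memory: 3072,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.35.0-beta.0",
			ClusterName:       "newest-cni-448023",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
			ExtraOptions:      []ExtraOption{{Component: "kubeadm", Key: "pod-network-cidr", Value: "10.42.0.0/16"}},
		},
	}
	fmt.Printf("StartCluster: %+v\n", cc) // same %+v style as the log line
}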
	I1208 01:56:48.590473 1055021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:56:48.590547 1055021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:56:48.619202 1055021 cri.go:89] found id: ""
	I1208 01:56:48.619330 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:56:48.627096 1055021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:56:48.627120 1055021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:56:48.627172 1055021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:56:48.634458 1055021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:56:48.635058 1055021 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.635319 1055021 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-448023" cluster setting kubeconfig missing "newest-cni-448023" context setting]
	I1208 01:56:48.635800 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
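(Repairing the kubeconfig here means adding the missing cluster and context entries for newest-cni-448023 and rewriting the file under a write lock. A hedged sketch of the same repair using client-go's clientcmd API; the server address is taken from the node entry in the log, the helper and the CA path are illustrative.)

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// addCluster inserts a cluster and matching context into an existing
// kubeconfig file, then writes it back out.
func addCluster(kubeconfigPath, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{
		Server:               server,
		CertificateAuthority: caPath,
	}
	cfg.Contexts[name] = &api.Context{
		Cluster:  name,
		AuthInfo: name,
	}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, kubeconfigPath)
}

func main() {
	_ = addCluster(
		"/home/jenkins/minikube-integration/22054-789938/kubeconfig",
		"newest-cni-448023",
		"https://192.168.85.2:8443",
		"/home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt",
	)
}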
	I1208 01:56:48.637157 1055021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:56:48.644838 1055021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:56:48.644913 1055021 kubeadm.go:602] duration metric: took 17.785882ms to restartPrimaryControlPlane
	I1208 01:56:48.644930 1055021 kubeadm.go:403] duration metric: took 54.567759ms to StartCluster
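(The restart path decides whether kubeadm must be re-run by diffing the kubeadm.yaml already on the node against the freshly generated one; diff exiting 0 is what produces the "does not require reconfiguration" line above. A minimal local sketch of that exit-code check; the helper name is illustrative and the real run executes diff over SSH.)

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig runs `diff -u` and interprets the exit code:
// 0 = identical (no reconfig needed), 1 = files differ, anything else = error.
func needsReconfig(current, desired string) (bool, error) {
	err := exec.Command("diff", "-u", current, desired).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}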
	I1208 01:56:48.644947 1055021 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.645007 1055021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.645870 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.646084 1055021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:56:48.646389 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:48.646439 1055021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:56:48.646504 1055021 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-448023"
	I1208 01:56:48.646529 1055021 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-448023"
	I1208 01:56:48.646555 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647285 1055021 addons.go:70] Setting dashboard=true in profile "newest-cni-448023"
	I1208 01:56:48.647305 1055021 addons.go:239] Setting addon dashboard=true in "newest-cni-448023"
	W1208 01:56:48.647311 1055021 addons.go:248] addon dashboard should already be in state true
	I1208 01:56:48.647331 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.647957 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.648448 1055021 addons.go:70] Setting default-storageclass=true in profile "newest-cni-448023"
	I1208 01:56:48.648476 1055021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-448023"
	I1208 01:56:48.648734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.651945 1055021 out.go:179] * Verifying Kubernetes components...
	I1208 01:56:48.654867 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:48.684864 1055021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:56:48.691009 1055021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:56:48.694226 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:56:48.694251 1055021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:56:48.694323 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.695436 1055021 addons.go:239] Setting addon default-storageclass=true in "newest-cni-448023"
	I1208 01:56:48.695482 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.695884 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.701699 1055021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1208 01:56:47.019431 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:49.518464 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:48.704558 1055021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.704591 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:56:48.704655 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.736846 1055021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.736869 1055021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:56:48.736936 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.742543 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.766983 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.785430 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
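(All three ssh clients above dial 127.0.0.1:33817, the host port Docker published for the container's 22/tcp; it is resolved with the `docker container inspect -f` template shown a few lines earlier. A hedged Go sketch of that lookup; the container name comes from the log, the helper is illustrative.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's
// 22/tcp, using the same Go template as the cli_runner call in the log.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-448023")
	fmt.Println(port, err)
}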
	I1208 01:56:48.885046 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:48.955470 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:56:48.955498 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:56:48.963459 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.965887 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.978338 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:56:48.978366 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:56:49.016188 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:56:49.016210 1055021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:56:49.061303 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:56:49.061328 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:56:49.074921 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:56:49.074987 1055021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:56:49.087412 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:56:49.087487 1055021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:56:49.099641 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:56:49.099667 1055021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:56:49.112487 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:56:49.112550 1055021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:56:49.125264 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.125288 1055021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:56:49.138335 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.508759 1055021 api_server.go:52] waiting for apiserver process to appear ...
	W1208 01:56:49.508918 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509385 1055021 retry.go:31] will retry after 199.05184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509006 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509406 1055021 retry.go:31] will retry after 322.784094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509263 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509418 1055021 retry.go:31] will retry after 353.691521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509538 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:49.709327 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:49.771304 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.771383 1055021 retry.go:31] will retry after 463.845922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.832454 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:49.863948 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:49.893225 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.893260 1055021 retry.go:31] will retry after 412.627767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.933504 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.933538 1055021 retry.go:31] will retry after 461.252989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.009945 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.235907 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:50.306466 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:50.322038 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.322071 1055021 retry.go:31] will retry after 523.830022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:50.380008 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.380051 1055021 retry.go:31] will retry after 753.154513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.395255 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:50.456642 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.456676 1055021 retry.go:31] will retry after 803.433098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.509737 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.846838 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:50.908365 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.908408 1055021 retry.go:31] will retry after 671.521026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.519391 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:54.018689 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
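(The interleaved warnings from the no-preload-389831 run come from a separate test polling that node's Ready condition while its apiserver is also down, so each poll fails with connection refused and is retried. A hedged client-go sketch of a single Ready-condition check; the node name and kubeconfig path are taken from the logs, the helper itself is illustrative.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches a node and reports whether its Ready condition is True.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp ...:8443: connect: connection refused
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady("/home/jenkins/minikube-integration/22054-789938/kubeconfig", "no-preload-389831")
	fmt.Println(ok, err)
}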
	I1208 01:56:51.009996 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.134042 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.192423 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.192455 1055021 retry.go:31] will retry after 689.227768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.260665 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:51.319134 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.319182 1055021 retry.go:31] will retry after 541.526321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.509442 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.580384 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:51.640452 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.640485 1055021 retry.go:31] will retry after 844.977075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.861863 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:51.882351 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.944280 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.944321 1055021 retry.go:31] will retry after 1.000499188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.967122 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.967155 1055021 retry.go:31] will retry after 859.890122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.010305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:52.486447 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:52.510056 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:56:52.585753 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.585816 1055021 retry.go:31] will retry after 1.004705222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.828167 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:52.886091 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.886122 1055021 retry.go:31] will retry after 2.82316744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.945292 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:53.006627 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.006710 1055021 retry.go:31] will retry after 2.04955933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.009824 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.510073 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.591501 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:53.650678 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.650712 1055021 retry.go:31] will retry after 3.502569911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:54.010159 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:54.509667 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.009590 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.057336 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:55.132269 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.132307 1055021 retry.go:31] will retry after 2.513983979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.509439 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.710171 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:55.769058 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.769091 1055021 retry.go:31] will retry after 2.669645777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:56.518414 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:58.518521 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:59.018412 1047159 node_ready.go:38] duration metric: took 6m0.000405007s for node "no-preload-389831" to be "Ready" ...
	I1208 01:56:59.026905 1047159 out.go:203] 
	W1208 01:56:59.029838 1047159 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:56:59.029857 1047159 out.go:285] * 
	W1208 01:56:59.032175 1047159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:56:59.035425 1047159 out.go:203] 
	I1208 01:56:56.009694 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.509523 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.010140 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.153585 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:57.218181 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.218214 1055021 retry.go:31] will retry after 3.909169329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.647096 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:57.710136 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.710169 1055021 retry.go:31] will retry after 4.894098122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.009665 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:58.439443 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:58.505497 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.505529 1055021 retry.go:31] will retry after 6.007342944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.009469 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.510388 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.015300 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.509494 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.010257 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.128215 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:01.190419 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.190453 1055021 retry.go:31] will retry after 9.504933562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.509623 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.009676 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.509462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.605116 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:02.675800 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:02.675835 1055021 retry.go:31] will retry after 6.984717516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:03.009407 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:03.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.015233 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.509531 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.514060 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:04.574188 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:04.574220 1055021 retry.go:31] will retry after 6.522846226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:05.012398 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.509759 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.010229 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.509419 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.009462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.510275 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.010363 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.010036 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.509454 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.661163 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:09.722054 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:09.722085 1055021 retry.go:31] will retry after 5.465119302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.010374 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.510222 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.696134 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:10.771084 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.771123 1055021 retry.go:31] will retry after 11.695285792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.009829 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.098157 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:11.159270 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.159302 1055021 retry.go:31] will retry after 8.417822009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.509651 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.010126 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.009464 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.510317 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.009529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.510393 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.009573 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.188355 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:15.251108 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.251147 1055021 retry.go:31] will retry after 12.201311078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.509570 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.009635 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.009802 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.510253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.509509 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.009459 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.509684 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.577986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:19.638356 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:19.638389 1055021 retry.go:31] will retry after 8.001395588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:20.012301 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.509725 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.010367 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.509456 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.009599 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.467388 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:57:22.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:22.532031 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:22.532062 1055021 retry.go:31] will retry after 11.135828112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:23.009468 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.509432 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.010095 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.510255 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.012400 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.010403 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.452716 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:27.510223 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:27.519149 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.519184 1055021 retry.go:31] will retry after 13.452567778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.640862 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:27.703487 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.703522 1055021 retry.go:31] will retry after 26.167048463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:28.009930 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:28.509594 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.009708 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.510396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.009745 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.010280 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.010087 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.509477 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.010351 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.509804 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.668898 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:33.729185 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:33.729219 1055021 retry.go:31] will retry after 25.894597219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:34.009473 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:34.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.010355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.010451 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.509505 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.009541 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.509700 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.014196 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.509592 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.010217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.510250 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.015373 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.510349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.972256 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:41.009839 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:41.066333 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.066366 1055021 retry.go:31] will retry after 34.953666856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.509748 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.009596 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.509438 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.009956 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.510378 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.009680 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.012784 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.510247 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.010335 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.509529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.009480 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.509657 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.009556 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.509689 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.009367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.009459 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.046711 1055021 cri.go:89] found id: ""
	I1208 01:57:49.046741 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.046749 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.046756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.046829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.086414 1055021 cri.go:89] found id: ""
	I1208 01:57:49.086435 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.086443 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.086449 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.086517 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:49.111234 1055021 cri.go:89] found id: ""
	I1208 01:57:49.111256 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.111264 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:49.111270 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:49.111328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:49.135868 1055021 cri.go:89] found id: ""
	I1208 01:57:49.135890 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.135899 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:49.135905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:49.135966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:49.161459 1055021 cri.go:89] found id: ""
	I1208 01:57:49.161482 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.161490 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:49.161496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:49.161557 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:49.186397 1055021 cri.go:89] found id: ""
	I1208 01:57:49.186421 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.186430 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:49.186436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:49.186542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:49.213171 1055021 cri.go:89] found id: ""
	I1208 01:57:49.213192 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.213201 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:49.213207 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:49.213265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:49.239381 1055021 cri.go:89] found id: ""
	I1208 01:57:49.239451 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.239484 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:49.239500 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:49.239512 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:49.311423 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:49.311459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:49.331846 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:49.331876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:49.396868 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:49.396933 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:49.396954 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:49.425376 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:49.425412 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:51.956807 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:51.967366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:51.967435 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:51.995332 1055021 cri.go:89] found id: ""
	I1208 01:57:51.995356 1055021 logs.go:282] 0 containers: []
	W1208 01:57:51.995364 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:51.995371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:51.995429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.032087 1055021 cri.go:89] found id: ""
	I1208 01:57:52.032112 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.032121 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.032128 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.032190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.066375 1055021 cri.go:89] found id: ""
	I1208 01:57:52.066403 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.066412 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.066420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.066490 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:52.098263 1055021 cri.go:89] found id: ""
	I1208 01:57:52.098291 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.098300 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:52.098306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:52.098376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:52.125642 1055021 cri.go:89] found id: ""
	I1208 01:57:52.125672 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.125681 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:52.125688 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:52.125750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:52.155324 1055021 cri.go:89] found id: ""
	I1208 01:57:52.155348 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.155356 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:52.155363 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:52.155424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:52.180558 1055021 cri.go:89] found id: ""
	I1208 01:57:52.180625 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.180647 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:52.180659 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:52.180742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:52.209892 1055021 cri.go:89] found id: ""
	I1208 01:57:52.209921 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.209930 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:52.209940 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:52.209951 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:52.237887 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:52.237925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:52.279083 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:52.279113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:52.360508 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:52.360547 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:52.379387 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:52.379417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:52.443498 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.871074 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:53.931966 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:53.931998 1055021 retry.go:31] will retry after 33.054913046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:54.943790 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:54.955406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:54.955477 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:54.980272 1055021 cri.go:89] found id: ""
	I1208 01:57:54.980295 1055021 logs.go:282] 0 containers: []
	W1208 01:57:54.980303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:54.980310 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:54.980377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.016873 1055021 cri.go:89] found id: ""
	I1208 01:57:55.016950 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.016973 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.016992 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.017116 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.055884 1055021 cri.go:89] found id: ""
	I1208 01:57:55.055905 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.055914 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.055920 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.055979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.085540 1055021 cri.go:89] found id: ""
	I1208 01:57:55.085561 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.085569 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.085576 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.085641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.111356 1055021 cri.go:89] found id: ""
	I1208 01:57:55.111378 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.111386 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.111393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.111473 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:55.137620 1055021 cri.go:89] found id: ""
	I1208 01:57:55.137643 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.137651 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:55.137657 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:55.137717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:55.162561 1055021 cri.go:89] found id: ""
	I1208 01:57:55.162626 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.162650 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:55.162667 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:55.162751 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:55.188593 1055021 cri.go:89] found id: ""
	I1208 01:57:55.188658 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.188683 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:55.188697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:55.188744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:55.254035 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:55.254057 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:55.254081 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:55.286453 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:55.286528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:55.320738 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:55.320762 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.387748 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:55.387783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:57.905905 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:57.918662 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:57.918736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:57.946026 1055021 cri.go:89] found id: ""
	I1208 01:57:57.946049 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.946058 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:57.946065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:57.946124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:57.971642 1055021 cri.go:89] found id: ""
	I1208 01:57:57.971669 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.971678 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:57.971685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:57.971744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.007407 1055021 cri.go:89] found id: ""
	I1208 01:57:58.007432 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.007441 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.007447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.007523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.050421 1055021 cri.go:89] found id: ""
	I1208 01:57:58.050442 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.050450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.050457 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.050518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.083694 1055021 cri.go:89] found id: ""
	I1208 01:57:58.083719 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.083728 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.083741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.083800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.110828 1055021 cri.go:89] found id: ""
	I1208 01:57:58.110874 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.110882 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.110899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.110974 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.136277 1055021 cri.go:89] found id: ""
	I1208 01:57:58.136302 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.136310 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.136317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.136378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:58.162168 1055021 cri.go:89] found id: ""
	I1208 01:57:58.162234 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.162258 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:58.162280 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:58.162304 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.191089 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:58.191121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:58.262015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:58.262058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:58.282086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:58.282121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:58.355880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:58.355910 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:58.355926 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:59.624913 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:59.684883 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:59.684920 1055021 retry.go:31] will retry after 39.668120724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:00.884752 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:00.909814 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:00.909896 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:00.936313 1055021 cri.go:89] found id: ""
	I1208 01:58:00.936344 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.936353 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:00.936360 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:00.936420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:00.966288 1055021 cri.go:89] found id: ""
	I1208 01:58:00.966355 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.966376 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:00.966394 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:00.966483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:00.992494 1055021 cri.go:89] found id: ""
	I1208 01:58:00.992526 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.992536 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:00.992543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:00.992608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.026941 1055021 cri.go:89] found id: ""
	I1208 01:58:01.026969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.026979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.026985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.027057 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.058196 1055021 cri.go:89] found id: ""
	I1208 01:58:01.058224 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.058233 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.058239 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.058301 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.086997 1055021 cri.go:89] found id: ""
	I1208 01:58:01.087025 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.087034 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.087042 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.087124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.113372 1055021 cri.go:89] found id: ""
	I1208 01:58:01.113401 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.113411 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.113417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.113480 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.140687 1055021 cri.go:89] found id: ""
	I1208 01:58:01.140717 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.140726 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.140736 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.140747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:01.211011 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:01.211061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:01.229916 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:01.229948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:01.319423 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:01.319443 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:01.319455 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:01.349176 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:01.349213 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:03.883281 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:03.894087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:03.894159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:03.919271 1055021 cri.go:89] found id: ""
	I1208 01:58:03.919294 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.919302 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:03.919309 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:03.919367 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:03.944356 1055021 cri.go:89] found id: ""
	I1208 01:58:03.944379 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.944387 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:03.944393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:03.944456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:03.969863 1055021 cri.go:89] found id: ""
	I1208 01:58:03.969890 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.969900 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:03.969907 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:03.969981 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:03.995306 1055021 cri.go:89] found id: ""
	I1208 01:58:03.995328 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.995336 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:03.995344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:03.995402 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.037050 1055021 cri.go:89] found id: ""
	I1208 01:58:04.037079 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.037089 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.037096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.037159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.081029 1055021 cri.go:89] found id: ""
	I1208 01:58:04.081057 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.081066 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.081073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.081139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.111984 1055021 cri.go:89] found id: ""
	I1208 01:58:04.112005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.112013 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.112020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.112079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.140750 1055021 cri.go:89] found id: ""
	I1208 01:58:04.140776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.140784 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.140793 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.140805 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.207146 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.207183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:04.225030 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:04.225061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:04.295674 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:04.295696 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:04.295708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:04.326962 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.327003 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:06.859119 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:06.871159 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:06.871236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:06.901570 1055021 cri.go:89] found id: ""
	I1208 01:58:06.901594 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.901603 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:06.901618 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:06.901681 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:06.930193 1055021 cri.go:89] found id: ""
	I1208 01:58:06.930220 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.930229 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:06.930235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:06.930298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:06.955159 1055021 cri.go:89] found id: ""
	I1208 01:58:06.955188 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.955197 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:06.955205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:06.955278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:06.980007 1055021 cri.go:89] found id: ""
	I1208 01:58:06.980031 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.980040 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:06.980046 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:06.980103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.017391 1055021 cri.go:89] found id: ""
	I1208 01:58:07.017417 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.017425 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.017432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.017495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.048550 1055021 cri.go:89] found id: ""
	I1208 01:58:07.048577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.048586 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.048596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.048659 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.080691 1055021 cri.go:89] found id: ""
	I1208 01:58:07.080759 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.080783 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.080796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.080874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.105849 1055021 cri.go:89] found id: ""
	I1208 01:58:07.105925 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.105948 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.105971 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:07.106012 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:07.138653 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.138732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.206905 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.206940 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.224653 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.224683 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.303888 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.303912 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:07.303925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:09.834549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:09.845152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:09.845227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:09.870225 1055021 cri.go:89] found id: ""
	I1208 01:58:09.870251 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.870259 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:09.870268 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:09.870330 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:09.896168 1055021 cri.go:89] found id: ""
	I1208 01:58:09.896191 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.896200 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:09.896206 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:09.896269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:09.922117 1055021 cri.go:89] found id: ""
	I1208 01:58:09.922140 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.922149 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:09.922155 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:09.922215 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:09.947105 1055021 cri.go:89] found id: ""
	I1208 01:58:09.947129 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.947137 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:09.947143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:09.947236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:09.972509 1055021 cri.go:89] found id: ""
	I1208 01:58:09.972535 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.972544 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:09.972551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:09.972609 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.009065 1055021 cri.go:89] found id: ""
	I1208 01:58:10.009097 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.009107 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.009115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.009196 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.052170 1055021 cri.go:89] found id: ""
	I1208 01:58:10.052197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.052206 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.052212 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.052278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.078447 1055021 cri.go:89] found id: ""
	I1208 01:58:10.078472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.078480 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.078489 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:10.078500 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:10.109259 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.109300 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:10.138226 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.138251 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.204388 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.204424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.222357 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.222398 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.305027 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:12.805305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:12.815949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:12.816024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:12.840507 1055021 cri.go:89] found id: ""
	I1208 01:58:12.840531 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.840540 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:12.840546 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:12.840614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:12.865555 1055021 cri.go:89] found id: ""
	I1208 01:58:12.865580 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.865589 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:12.865595 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:12.865653 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:12.890286 1055021 cri.go:89] found id: ""
	I1208 01:58:12.890311 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.890319 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:12.890325 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:12.890383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:12.915193 1055021 cri.go:89] found id: ""
	I1208 01:58:12.915217 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.915226 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:12.915233 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:12.915291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:12.940889 1055021 cri.go:89] found id: ""
	I1208 01:58:12.940915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.940923 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:12.940931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:12.941011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:12.967233 1055021 cri.go:89] found id: ""
	I1208 01:58:12.967259 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.967268 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:12.967275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:12.967337 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:12.990975 1055021 cri.go:89] found id: ""
	I1208 01:58:12.991001 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.991009 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:12.991016 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:12.991088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.025590 1055021 cri.go:89] found id: ""
	I1208 01:58:13.025616 1055021 logs.go:282] 0 containers: []
	W1208 01:58:13.025625 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.025634 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.025646 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.063362 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.063391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.134922 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.134959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.153025 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.153060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.215226 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:13.215246 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:13.215258 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:15.744740 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:15.755312 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:15.755383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:15.780891 1055021 cri.go:89] found id: ""
	I1208 01:58:15.780915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.780923 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:15.780930 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:15.780989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:15.806161 1055021 cri.go:89] found id: ""
	I1208 01:58:15.806185 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.806194 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:15.806200 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:15.806257 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:15.831178 1055021 cri.go:89] found id: ""
	I1208 01:58:15.831197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.831205 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:15.831211 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:15.831269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:15.856130 1055021 cri.go:89] found id: ""
	I1208 01:58:15.856155 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.856164 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:15.856171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:15.856232 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:15.885064 1055021 cri.go:89] found id: ""
	I1208 01:58:15.885136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.885159 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:15.885177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:15.885270 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:15.912595 1055021 cri.go:89] found id: ""
	I1208 01:58:15.912623 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.912631 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:15.912638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:15.912700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:15.936650 1055021 cri.go:89] found id: ""
	I1208 01:58:15.936677 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.936686 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:15.936692 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:15.936752 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:15.962329 1055021 cri.go:89] found id: ""
	I1208 01:58:15.962350 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.962358 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:15.962367 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:15.962378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 01:58:16.020986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:16.067660 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:16.067744 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:16.067772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1208 01:58:16.112099 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.112132 1055021 retry.go:31] will retry after 29.72360839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.126560 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.126615 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.157854 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.157883 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.223999 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.224035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:18.742355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:18.752998 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:18.753077 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:18.778077 1055021 cri.go:89] found id: ""
	I1208 01:58:18.778099 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.778107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:18.778114 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:18.778171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:18.802643 1055021 cri.go:89] found id: ""
	I1208 01:58:18.802665 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.802673 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:18.802679 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:18.802736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:18.827413 1055021 cri.go:89] found id: ""
	I1208 01:58:18.827441 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.827450 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:18.827456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:18.827514 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:18.852593 1055021 cri.go:89] found id: ""
	I1208 01:58:18.852618 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.852627 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:18.852634 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:18.852694 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:18.877850 1055021 cri.go:89] found id: ""
	I1208 01:58:18.877876 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.877884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:18.877891 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:18.877949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:18.906907 1055021 cri.go:89] found id: ""
	I1208 01:58:18.906930 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.906938 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:18.906945 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:18.907007 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:18.932699 1055021 cri.go:89] found id: ""
	I1208 01:58:18.932723 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.932733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:18.932739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:18.932802 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:18.958426 1055021 cri.go:89] found id: ""
	I1208 01:58:18.958448 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.958456 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:18.958465 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:18.958476 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.023824 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.023904 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.043811 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.043946 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.116236 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:19.116259 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:19.116273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:19.145950 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.145986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:21.678015 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:21.689017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:21.689107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:21.714453 1055021 cri.go:89] found id: ""
	I1208 01:58:21.714513 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.714522 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:21.714529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:21.714590 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:21.738662 1055021 cri.go:89] found id: ""
	I1208 01:58:21.738688 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.738697 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:21.738703 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:21.738765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:21.763648 1055021 cri.go:89] found id: ""
	I1208 01:58:21.763684 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.763693 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:21.763700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:21.763768 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:21.789120 1055021 cri.go:89] found id: ""
	I1208 01:58:21.789142 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.789150 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:21.789156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:21.789212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:21.814445 1055021 cri.go:89] found id: ""
	I1208 01:58:21.814466 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.814474 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:21.814480 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:21.814538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:21.843027 1055021 cri.go:89] found id: ""
	I1208 01:58:21.843061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.843070 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:21.843078 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:21.843139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:21.872604 1055021 cri.go:89] found id: ""
	I1208 01:58:21.872632 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.872640 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:21.872647 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:21.872725 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:21.898190 1055021 cri.go:89] found id: ""
	I1208 01:58:21.898225 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.898233 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:21.898258 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:21.898274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:21.963735 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:21.963774 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:21.981549 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:21.981580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.065337 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:22.065359 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:22.065373 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:22.096383 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.096419 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:24.626630 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:24.637406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:24.637484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:24.662982 1055021 cri.go:89] found id: ""
	I1208 01:58:24.663005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.663014 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:24.663020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:24.663088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:24.687863 1055021 cri.go:89] found id: ""
	I1208 01:58:24.687887 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.687897 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:24.687904 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:24.687965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:24.713087 1055021 cri.go:89] found id: ""
	I1208 01:58:24.713110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.713119 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:24.713125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:24.713185 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:24.738346 1055021 cri.go:89] found id: ""
	I1208 01:58:24.738369 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.738378 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:24.738385 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:24.738451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:24.764281 1055021 cri.go:89] found id: ""
	I1208 01:58:24.764309 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.764317 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:24.764323 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:24.764382 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:24.788244 1055021 cri.go:89] found id: ""
	I1208 01:58:24.788267 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.788276 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:24.788282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:24.788358 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:24.812521 1055021 cri.go:89] found id: ""
	I1208 01:58:24.812544 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.812553 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:24.812559 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:24.812620 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:24.837747 1055021 cri.go:89] found id: ""
	I1208 01:58:24.837772 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.837781 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:24.837790 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:24.837804 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:24.903152 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:24.903189 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:24.920792 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:24.920824 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:24.987709 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:24.987780 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:24.987806 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:25.019693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.019773 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:26.987306 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:58:27.057603 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:27.057721 1055021 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:27.560847 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:27.570936 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:27.571004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:27.595473 1055021 cri.go:89] found id: ""
	I1208 01:58:27.595497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.595505 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:27.595512 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:27.595577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:27.620674 1055021 cri.go:89] found id: ""
	I1208 01:58:27.620696 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.620704 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:27.620710 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:27.620766 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:27.646168 1055021 cri.go:89] found id: ""
	I1208 01:58:27.646192 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.646202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:27.646208 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:27.646283 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:27.671472 1055021 cri.go:89] found id: ""
	I1208 01:58:27.671549 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.671564 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:27.671572 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:27.671632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:27.699385 1055021 cri.go:89] found id: ""
	I1208 01:58:27.699409 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.699417 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:27.699423 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:27.699492 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:27.726912 1055021 cri.go:89] found id: ""
	I1208 01:58:27.726937 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.726946 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:27.726953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:27.727011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:27.752037 1055021 cri.go:89] found id: ""
	I1208 01:58:27.752061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.752070 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:27.752076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:27.752139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:27.777018 1055021 cri.go:89] found id: ""
	I1208 01:58:27.777081 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.777097 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:27.777106 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:27.777119 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:27.845091 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:27.845115 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:27.845129 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:27.873750 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:27.873794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:27.906540 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:27.906569 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:27.986314 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:27.986360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.504860 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:30.520332 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:30.520426 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:30.558545 1055021 cri.go:89] found id: ""
	I1208 01:58:30.558574 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.558589 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:30.558596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:30.558670 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:30.587958 1055021 cri.go:89] found id: ""
	I1208 01:58:30.587979 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.587988 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:30.587994 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:30.588055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:30.613947 1055021 cri.go:89] found id: ""
	I1208 01:58:30.613969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.613977 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:30.613983 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:30.614048 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:30.639872 1055021 cri.go:89] found id: ""
	I1208 01:58:30.639899 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.639908 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:30.639916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:30.639975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:30.664766 1055021 cri.go:89] found id: ""
	I1208 01:58:30.664789 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.664797 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:30.664804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:30.664862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:30.694045 1055021 cri.go:89] found id: ""
	I1208 01:58:30.694110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.694130 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:30.694149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:30.694238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:30.719821 1055021 cri.go:89] found id: ""
	I1208 01:58:30.719843 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.719851 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:30.719857 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:30.719915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:30.745151 1055021 cri.go:89] found id: ""
	I1208 01:58:30.745176 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.745185 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:30.745194 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:30.745206 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:30.808884 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:30.808918 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.826624 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:30.826650 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:30.895279 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:30.895304 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:30.895317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:30.927429 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:30.927478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:33.458304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:33.468970 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:33.469040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:33.493566 1055021 cri.go:89] found id: ""
	I1208 01:58:33.493592 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.493601 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:33.493608 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:33.493669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:33.526608 1055021 cri.go:89] found id: ""
	I1208 01:58:33.526630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.526638 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:33.526644 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:33.526705 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:33.560265 1055021 cri.go:89] found id: ""
	I1208 01:58:33.560287 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.560295 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:33.560301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:33.560376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:33.588803 1055021 cri.go:89] found id: ""
	I1208 01:58:33.588830 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.588839 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:33.588846 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:33.588908 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:33.614585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.614610 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.614619 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:33.614625 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:33.614684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:33.638894 1055021 cri.go:89] found id: ""
	I1208 01:58:33.638917 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.638926 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:33.638933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:33.638991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:33.664714 1055021 cri.go:89] found id: ""
	I1208 01:58:33.664736 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.664744 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:33.664752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:33.664814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:33.689585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.689611 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.689620 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:33.689629 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:33.689641 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:33.753906 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:33.753942 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:33.771754 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:33.771783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:33.841023 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:33.841047 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:33.841060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:33.868853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:33.868891 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.397728 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.410372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.410443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:36.441015 1055021 cri.go:89] found id: ""
	I1208 01:58:36.441041 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.441049 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:36.441055 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:36.441117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:36.466353 1055021 cri.go:89] found id: ""
	I1208 01:58:36.466386 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.466395 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:36.466401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:36.466463 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:36.491643 1055021 cri.go:89] found id: ""
	I1208 01:58:36.491670 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.491679 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:36.491685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:36.491743 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:36.531444 1055021 cri.go:89] found id: ""
	I1208 01:58:36.531472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.531480 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:36.531487 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:36.531551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:36.561863 1055021 cri.go:89] found id: ""
	I1208 01:58:36.561891 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.561900 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:36.561906 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:36.561965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:36.598817 1055021 cri.go:89] found id: ""
	I1208 01:58:36.598868 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.598877 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:36.598884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:36.598953 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:36.625352 1055021 cri.go:89] found id: ""
	I1208 01:58:36.625392 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.625402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:36.625408 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:36.625478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:36.649929 1055021 cri.go:89] found id: ""
	I1208 01:58:36.649961 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.649969 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:36.649979 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:36.649991 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:36.717242 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:36.717272 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:36.717284 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:36.745340 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:36.745375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.772396 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:36.772423 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:36.840336 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:36.840375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.353819 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:58:39.359310 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:58:39.415165 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:39.415265 1055021 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:39.415318 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.415380 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.440780 1055021 cri.go:89] found id: ""
	I1208 01:58:39.440802 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.440817 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.440824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.440883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:39.469267 1055021 cri.go:89] found id: ""
	I1208 01:58:39.469293 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.469302 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:39.469308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:39.469369 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:39.497131 1055021 cri.go:89] found id: ""
	I1208 01:58:39.497154 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.497162 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:39.497171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:39.497229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:39.533641 1055021 cri.go:89] found id: ""
	I1208 01:58:39.533666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.533675 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:39.533683 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:39.533741 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:39.569861 1055021 cri.go:89] found id: ""
	I1208 01:58:39.569884 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.569893 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:39.569900 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:39.569959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:39.598670 1055021 cri.go:89] found id: ""
	I1208 01:58:39.598694 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.598702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:39.598709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:39.598770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:39.623360 1055021 cri.go:89] found id: ""
	I1208 01:58:39.623384 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.623392 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:39.623398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:39.623464 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:39.647840 1055021 cri.go:89] found id: ""
	I1208 01:58:39.647864 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.647873 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:39.647881 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:39.647893 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:39.711466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:39.711505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.728921 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:39.728950 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:39.792077 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:39.792097 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:39.792111 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:39.819026 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:39.819064 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.348228 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.359751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.359835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.385781 1055021 cri.go:89] found id: ""
	I1208 01:58:42.385808 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.385818 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.385824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.385884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:42.412513 1055021 cri.go:89] found id: ""
	I1208 01:58:42.412540 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.412555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:42.412562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:42.412621 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:42.439136 1055021 cri.go:89] found id: ""
	I1208 01:58:42.439202 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.439217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:42.439223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:42.439297 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:42.468994 1055021 cri.go:89] found id: ""
	I1208 01:58:42.469069 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.469092 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:42.469105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:42.469190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:42.493446 1055021 cri.go:89] found id: ""
	I1208 01:58:42.493481 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.493489 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:42.493496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:42.493573 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:42.535705 1055021 cri.go:89] found id: ""
	I1208 01:58:42.535751 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.535760 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:42.535768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:42.535838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:42.565148 1055021 cri.go:89] found id: ""
	I1208 01:58:42.565174 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.565183 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:42.565189 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:42.565262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:42.592944 1055021 cri.go:89] found id: ""
	I1208 01:58:42.592967 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.592975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:42.592984 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:42.592995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.627360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:42.627389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:42.692577 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:42.692611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:42.710349 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:42.710378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:42.782051 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:42.782073 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:42.782085 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.310746 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.328999 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.329226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.355526 1055021 cri.go:89] found id: ""
	I1208 01:58:45.355554 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.355562 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.355569 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.355649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.385050 1055021 cri.go:89] found id: ""
	I1208 01:58:45.385073 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.385081 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.385087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.385146 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.409413 1055021 cri.go:89] found id: ""
	I1208 01:58:45.409438 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.409447 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.409452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.409510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:45.445870 1055021 cri.go:89] found id: ""
	I1208 01:58:45.445903 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.445912 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:45.445919 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:45.445988 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:45.473347 1055021 cri.go:89] found id: ""
	I1208 01:58:45.473382 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.473391 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:45.473397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:45.473465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:45.497721 1055021 cri.go:89] found id: ""
	I1208 01:58:45.497756 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.497765 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:45.497772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:45.497839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:45.529708 1055021 cri.go:89] found id: ""
	I1208 01:58:45.529739 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.529748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:45.529754 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:45.529829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:45.556748 1055021 cri.go:89] found id: ""
	I1208 01:58:45.556783 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.556792 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:45.556801 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:45.556812 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:45.623617 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:45.623652 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:45.642117 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:45.642151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:45.711093 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:45.711114 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:45.711127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.739133 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:45.739169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.836195 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:45.896793 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:45.896954 1055021 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:45.900444 1055021 out.go:179] * Enabled addons: 
	I1208 01:58:45.903391 1055021 addons.go:530] duration metric: took 1m57.256950319s for enable addons: enabled=[]
	I1208 01:58:48.271013 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.282344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.282467 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.314973 1055021 cri.go:89] found id: ""
	I1208 01:58:48.315046 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.315078 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.315098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.315204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.344987 1055021 cri.go:89] found id: ""
	I1208 01:58:48.345017 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.345026 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.345033 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.345094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.370650 1055021 cri.go:89] found id: ""
	I1208 01:58:48.370674 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.370681 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.370687 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.370749 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.396253 1055021 cri.go:89] found id: ""
	I1208 01:58:48.396319 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.396334 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.396341 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.396410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.425208 1055021 cri.go:89] found id: ""
	I1208 01:58:48.425235 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.425244 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.425250 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.425312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:48.455125 1055021 cri.go:89] found id: ""
	I1208 01:58:48.455150 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.455160 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:48.455177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:48.455238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:48.479964 1055021 cri.go:89] found id: ""
	I1208 01:58:48.480043 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.480059 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:48.480067 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:48.480128 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:48.506875 1055021 cri.go:89] found id: ""
	I1208 01:58:48.506902 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.506911 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:48.506920 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:48.506933 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:48.581685 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:48.581724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:48.600281 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:48.600313 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:48.663184 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:48.663203 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:48.663217 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:48.691509 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:48.691549 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.221462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.231909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.231985 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.262905 1055021 cri.go:89] found id: ""
	I1208 01:58:51.262932 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.262940 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.262946 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.263006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.293540 1055021 cri.go:89] found id: ""
	I1208 01:58:51.293567 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.293576 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.293582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.293639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.324201 1055021 cri.go:89] found id: ""
	I1208 01:58:51.324228 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.324236 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.324242 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.324298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.351933 1055021 cri.go:89] found id: ""
	I1208 01:58:51.351960 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.351974 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.351981 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.352040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.376814 1055021 cri.go:89] found id: ""
	I1208 01:58:51.376836 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.376845 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.376851 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.376909 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.401752 1055021 cri.go:89] found id: ""
	I1208 01:58:51.401776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.401785 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.401791 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.401848 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.430825 1055021 cri.go:89] found id: ""
	I1208 01:58:51.430861 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.430870 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.430876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.430938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.455641 1055021 cri.go:89] found id: ""
	I1208 01:58:51.455666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.455674 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.455684 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.455695 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:51.527696 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:51.527719 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:51.527732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:51.557037 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:51.557072 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.589759 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:51.589789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:51.655851 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.655888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:54.174903 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.185290 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.185363 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.213134 1055021 cri.go:89] found id: ""
	I1208 01:58:54.213158 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.213167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.213174 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.213234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.238420 1055021 cri.go:89] found id: ""
	I1208 01:58:54.238446 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.238455 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.238461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.238524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.272304 1055021 cri.go:89] found id: ""
	I1208 01:58:54.272331 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.272339 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.272345 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.272405 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.302582 1055021 cri.go:89] found id: ""
	I1208 01:58:54.302608 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.302617 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.302623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.302683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.331550 1055021 cri.go:89] found id: ""
	I1208 01:58:54.331577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.331585 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.331591 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.331656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.356262 1055021 cri.go:89] found id: ""
	I1208 01:58:54.356285 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.356293 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.356300 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.356364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.382019 1055021 cri.go:89] found id: ""
	I1208 01:58:54.382045 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.382054 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.382060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.382120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.407111 1055021 cri.go:89] found id: ""
	I1208 01:58:54.407136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.407145 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.407154 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.407169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.470487 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.470509 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:54.470522 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:54.498660 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:54.498697 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:54.539432 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:54.539462 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.617690 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:54.617725 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.135616 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.145801 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.145871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.170603 1055021 cri.go:89] found id: ""
	I1208 01:58:57.170629 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.170637 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.170643 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.170701 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.197272 1055021 cri.go:89] found id: ""
	I1208 01:58:57.197300 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.197309 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.197315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.197379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.226393 1055021 cri.go:89] found id: ""
	I1208 01:58:57.226420 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.226430 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.226436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.226499 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.267139 1055021 cri.go:89] found id: ""
	I1208 01:58:57.267215 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.267239 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.267257 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.267350 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.302475 1055021 cri.go:89] found id: ""
	I1208 01:58:57.302497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.302505 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.302511 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.302571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.335859 1055021 cri.go:89] found id: ""
	I1208 01:58:57.335886 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.335894 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.335901 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.335959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.360608 1055021 cri.go:89] found id: ""
	I1208 01:58:57.360630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.360639 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.360646 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.360706 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.386045 1055021 cri.go:89] found id: ""
	I1208 01:58:57.386067 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.386076 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.386084 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.386096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:57.454478 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.454515 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.472469 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.472503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.545965 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.545998 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:57.546011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:57.584922 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.584959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:00.114637 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.175958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.176042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.249754 1055021 cri.go:89] found id: ""
	I1208 01:59:00.249778 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.249788 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.249795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.249868 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.304452 1055021 cri.go:89] found id: ""
	I1208 01:59:00.304487 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.304497 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.304503 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.304576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.346364 1055021 cri.go:89] found id: ""
	I1208 01:59:00.346424 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.346434 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.346465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.346577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.377822 1055021 cri.go:89] found id: ""
	I1208 01:59:00.377852 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.377862 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.377868 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.377963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.406823 1055021 cri.go:89] found id: ""
	I1208 01:59:00.406875 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.406884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.406908 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.406992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.435875 1055021 cri.go:89] found id: ""
	I1208 01:59:00.435911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.435920 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.435942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.436025 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.463084 1055021 cri.go:89] found id: ""
	I1208 01:59:00.463117 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.463126 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.463135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.463243 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.489555 1055021 cri.go:89] found id: ""
	I1208 01:59:00.489589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.489598 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.489626 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.489645 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.562522 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.562560 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.582358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.582389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.649877 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.649899 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:00.649912 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:00.682085 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.682120 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.216065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.226430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.226503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.253068 1055021 cri.go:89] found id: ""
	I1208 01:59:03.253093 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.253102 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.253109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.253168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.282867 1055021 cri.go:89] found id: ""
	I1208 01:59:03.282894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.282903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.282910 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.282969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.320054 1055021 cri.go:89] found id: ""
	I1208 01:59:03.320080 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.320092 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.320098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.320155 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.347220 1055021 cri.go:89] found id: ""
	I1208 01:59:03.347243 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.347252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.347258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.347319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.373498 1055021 cri.go:89] found id: ""
	I1208 01:59:03.373570 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.373595 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.373613 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.373703 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.399912 1055021 cri.go:89] found id: ""
	I1208 01:59:03.399948 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.399957 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.399964 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.400023 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.425601 1055021 cri.go:89] found id: ""
	I1208 01:59:03.425625 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.425634 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.425640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.425698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.454732 1055021 cri.go:89] found id: ""
	I1208 01:59:03.454758 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.454767 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.454775 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.454789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.530461 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.530493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.549828 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.549917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.620701 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.620720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:03.620735 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:03.649018 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.649058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.177524 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.187461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.187531 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.214977 1055021 cri.go:89] found id: ""
	I1208 01:59:06.214999 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.215008 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.215015 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.215094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.238383 1055021 cri.go:89] found id: ""
	I1208 01:59:06.238493 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.238514 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.238534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.238619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.272265 1055021 cri.go:89] found id: ""
	I1208 01:59:06.272329 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.272351 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.272367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.272453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.302615 1055021 cri.go:89] found id: ""
	I1208 01:59:06.302658 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.302672 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.302678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.302750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.331427 1055021 cri.go:89] found id: ""
	I1208 01:59:06.331491 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.331512 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.331534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.331619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.356630 1055021 cri.go:89] found id: ""
	I1208 01:59:06.356711 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.356726 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.356734 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.356792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.382232 1055021 cri.go:89] found id: ""
	I1208 01:59:06.382265 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.382273 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.382279 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.382345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.409564 1055021 cri.go:89] found id: ""
	I1208 01:59:06.409598 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.409607 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.409616 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.409629 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.474483 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.474521 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:06.492236 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.492265 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.581040 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:06.581061 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:06.581074 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:06.609481 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.609528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:09.142358 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.152558 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.152645 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.176404 1055021 cri.go:89] found id: ""
	I1208 01:59:09.176469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.176483 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.176494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.176555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.200664 1055021 cri.go:89] found id: ""
	I1208 01:59:09.200687 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.200696 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.200702 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.200759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.227242 1055021 cri.go:89] found id: ""
	I1208 01:59:09.227266 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.227274 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.227280 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.227339 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.251746 1055021 cri.go:89] found id: ""
	I1208 01:59:09.251777 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.251786 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.251792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.251859 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.285331 1055021 cri.go:89] found id: ""
	I1208 01:59:09.285356 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.285365 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.285371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.285438 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.323377 1055021 cri.go:89] found id: ""
	I1208 01:59:09.323403 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.323411 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.323418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.323479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.348974 1055021 cri.go:89] found id: ""
	I1208 01:59:09.349042 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.349058 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.349065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.349127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.378922 1055021 cri.go:89] found id: ""
	I1208 01:59:09.378954 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.378962 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.378972 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.378983 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.444646 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.444685 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.462014 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.462050 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.537469 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.537502 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:09.537514 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:09.568427 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.568465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.103793 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.114409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.114485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.143200 1055021 cri.go:89] found id: ""
	I1208 01:59:12.143235 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.143245 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.143251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.143323 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.171946 1055021 cri.go:89] found id: ""
	I1208 01:59:12.171971 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.171979 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.171985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.172050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.196625 1055021 cri.go:89] found id: ""
	I1208 01:59:12.196651 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.196661 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.196669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.196775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.223108 1055021 cri.go:89] found id: ""
	I1208 01:59:12.223178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.223203 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.223221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.223315 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.253115 1055021 cri.go:89] found id: ""
	I1208 01:59:12.253141 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.253155 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.253173 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.253271 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.293405 1055021 cri.go:89] found id: ""
	I1208 01:59:12.293429 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.293438 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.293444 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.293512 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.323970 1055021 cri.go:89] found id: ""
	I1208 01:59:12.324002 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.324011 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.324017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.324087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.355979 1055021 cri.go:89] found id: ""
	I1208 01:59:12.356005 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.356013 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.356023 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.356035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.421458 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.421496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.440234 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.440269 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.509186 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.509214 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:12.509226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:12.541753 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.541790 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.078928 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.091792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.091882 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.118461 1055021 cri.go:89] found id: ""
	I1208 01:59:15.118482 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.118490 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.118496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.118561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.143588 1055021 cri.go:89] found id: ""
	I1208 01:59:15.143612 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.143621 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.143627 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.143687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.174121 1055021 cri.go:89] found id: ""
	I1208 01:59:15.174149 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.174158 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.174164 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.174281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.202466 1055021 cri.go:89] found id: ""
	I1208 01:59:15.202489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.202498 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.202504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.202563 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.229640 1055021 cri.go:89] found id: ""
	I1208 01:59:15.229663 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.229672 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.229678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.229737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.259982 1055021 cri.go:89] found id: ""
	I1208 01:59:15.260013 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.260021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.260027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.260085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.299510 1055021 cri.go:89] found id: ""
	I1208 01:59:15.299535 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.299544 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.299551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.299639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.327621 1055021 cri.go:89] found id: ""
	I1208 01:59:15.327655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.327664 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.327673 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.327684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:15.394588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.394632 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.412251 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.412283 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.478739 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.478760 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:15.478772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:15.507201 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.507279 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.049265 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.060577 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.060652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.087023 1055021 cri.go:89] found id: ""
	I1208 01:59:18.087050 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.087066 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.087073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.087132 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.115800 1055021 cri.go:89] found id: ""
	I1208 01:59:18.115826 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.115835 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.115841 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.115901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.145764 1055021 cri.go:89] found id: ""
	I1208 01:59:18.145787 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.145797 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.145803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.145862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.174947 1055021 cri.go:89] found id: ""
	I1208 01:59:18.174974 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.174983 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.174990 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.175050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.200824 1055021 cri.go:89] found id: ""
	I1208 01:59:18.200847 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.200857 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.200863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.200935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.229145 1055021 cri.go:89] found id: ""
	I1208 01:59:18.229168 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.229176 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.229185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.229246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.266059 1055021 cri.go:89] found id: ""
	I1208 01:59:18.266083 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.266092 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.266098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.266159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.293538 1055021 cri.go:89] found id: ""
	I1208 01:59:18.293605 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.293630 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.293657 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.293682 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.366543 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.366580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.387334 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.387367 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.457441 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:18.457480 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:18.457496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:18.486126 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.486159 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:21.020889 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.031877 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.031948 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.061454 1055021 cri.go:89] found id: ""
	I1208 01:59:21.061480 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.061489 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.061496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.061561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.086273 1055021 cri.go:89] found id: ""
	I1208 01:59:21.086300 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.086308 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.086315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.086373 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.112614 1055021 cri.go:89] found id: ""
	I1208 01:59:21.112637 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.112646 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.112652 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.112710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.142489 1055021 cri.go:89] found id: ""
	I1208 01:59:21.142511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.142521 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.142527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.142584 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.167579 1055021 cri.go:89] found id: ""
	I1208 01:59:21.167602 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.167618 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.167624 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.167683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.192114 1055021 cri.go:89] found id: ""
	I1208 01:59:21.192178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.192194 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.192202 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.192266 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.216638 1055021 cri.go:89] found id: ""
	I1208 01:59:21.216660 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.216669 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.216681 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.216739 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.241924 1055021 cri.go:89] found id: ""
	I1208 01:59:21.241956 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.241965 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.241989 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.242005 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.320443 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.320516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.339967 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.340098 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.405503 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.405526 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:21.405540 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:21.433479 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.433513 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:23.960720 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:23.971271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:23.971346 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:23.996003 1055021 cri.go:89] found id: ""
	I1208 01:59:23.996028 1055021 logs.go:282] 0 containers: []
	W1208 01:59:23.996037 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:23.996044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:23.996111 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.024119 1055021 cri.go:89] found id: ""
	I1208 01:59:24.024146 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.024154 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.024160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.024239 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.051095 1055021 cri.go:89] found id: ""
	I1208 01:59:24.051179 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.051202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.051217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.051298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.076451 1055021 cri.go:89] found id: ""
	I1208 01:59:24.076477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.076486 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.076493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.076577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.105499 1055021 cri.go:89] found id: ""
	I1208 01:59:24.105527 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.105537 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.105543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.105656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.136713 1055021 cri.go:89] found id: ""
	I1208 01:59:24.136736 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.136744 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.136751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.136836 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.165410 1055021 cri.go:89] found id: ""
	I1208 01:59:24.165442 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.165453 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.165460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.165541 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.194981 1055021 cri.go:89] found id: ""
	I1208 01:59:24.195018 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.195028 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.195037 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.195049 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.260506 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.260541 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.281317 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.281351 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.350532 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.350562 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:24.350574 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:24.378730 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.378760 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.906964 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.918049 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.918151 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:26.944808 1055021 cri.go:89] found id: ""
	I1208 01:59:26.944832 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.944840 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:26.944863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:26.944936 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:26.969519 1055021 cri.go:89] found id: ""
	I1208 01:59:26.969552 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.969561 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:26.969583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:26.969664 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:26.997687 1055021 cri.go:89] found id: ""
	I1208 01:59:26.997721 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.997730 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:26.997736 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:26.997835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.029005 1055021 cri.go:89] found id: ""
	I1208 01:59:27.029029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.029037 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.029044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.029121 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.052964 1055021 cri.go:89] found id: ""
	I1208 01:59:27.052989 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.053006 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.053027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.053114 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.081309 1055021 cri.go:89] found id: ""
	I1208 01:59:27.081342 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.081352 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.081375 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.081454 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.105197 1055021 cri.go:89] found id: ""
	I1208 01:59:27.105230 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.105239 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.105245 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.105311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.129963 1055021 cri.go:89] found id: ""
	I1208 01:59:27.129994 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.130003 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.130012 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:27.130023 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:27.157821 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.157853 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:27.187177 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.187201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.257425 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.257459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.284073 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.284112 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.365290 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:29.866080 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.876623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.876700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.905223 1055021 cri.go:89] found id: ""
	I1208 01:59:29.905247 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.905257 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.905264 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.905328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:29.935886 1055021 cri.go:89] found id: ""
	I1208 01:59:29.935911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.935920 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:29.935928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:29.935989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:29.961459 1055021 cri.go:89] found id: ""
	I1208 01:59:29.961489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.961499 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:29.961521 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:29.961588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:29.989601 1055021 cri.go:89] found id: ""
	I1208 01:59:29.989666 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.989691 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:29.989709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:29.989794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.034678 1055021 cri.go:89] found id: ""
	I1208 01:59:30.034757 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.034783 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.034802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.034922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.068355 1055021 cri.go:89] found id: ""
	I1208 01:59:30.068380 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.068388 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.068395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.068456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.095676 1055021 cri.go:89] found id: ""
	I1208 01:59:30.095706 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.095717 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.095723 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.095801 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.122432 1055021 cri.go:89] found id: ""
	I1208 01:59:30.122469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.122479 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.122504 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.122543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.191149 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:30.191170 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:30.191183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:30.220413 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.220447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.258205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.258234 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.330424 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.330461 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:32.850065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.861143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.861227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.885421 1055021 cri.go:89] found id: ""
	I1208 01:59:32.885447 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.885457 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.885463 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.885524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.911689 1055021 cri.go:89] found id: ""
	I1208 01:59:32.911716 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.911726 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.911732 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.911794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:32.941141 1055021 cri.go:89] found id: ""
	I1208 01:59:32.941166 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.941175 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:32.941182 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:32.941244 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:32.970750 1055021 cri.go:89] found id: ""
	I1208 01:59:32.970771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.970779 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:32.970786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:32.970883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:32.996768 1055021 cri.go:89] found id: ""
	I1208 01:59:32.996797 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.996806 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:32.996812 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:32.996887 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.025374 1055021 cri.go:89] found id: ""
	I1208 01:59:33.025410 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.025419 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.025448 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.025547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.051845 1055021 cri.go:89] found id: ""
	I1208 01:59:33.051878 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.051888 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.051895 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.051969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.078543 1055021 cri.go:89] found id: ""
	I1208 01:59:33.078566 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.078575 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.078584 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.078597 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.096489 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.096518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.168941 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.168962 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:33.168977 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:33.197574 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.197616 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:33.226563 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.226590 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:35.798966 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.810253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:35.810325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:35.835492 1055021 cri.go:89] found id: ""
	I1208 01:59:35.835516 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.835525 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:35.835534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:35.835593 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:35.861797 1055021 cri.go:89] found id: ""
	I1208 01:59:35.861823 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.861833 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:35.861839 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:35.861901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:35.887036 1055021 cri.go:89] found id: ""
	I1208 01:59:35.887073 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.887083 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:35.887090 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:35.887159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:35.915379 1055021 cri.go:89] found id: ""
	I1208 01:59:35.915456 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.915478 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:35.915493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:35.915566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:35.940687 1055021 cri.go:89] found id: ""
	I1208 01:59:35.940714 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.940724 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:35.940730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:35.940839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:35.967960 1055021 cri.go:89] found id: ""
	I1208 01:59:35.968038 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.968060 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:35.968074 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:35.968147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:35.993884 1055021 cri.go:89] found id: ""
	I1208 01:59:35.993927 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.993936 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:35.993942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:35.994012 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:36.027031 1055021 cri.go:89] found id: ""
	I1208 01:59:36.027056 1055021 logs.go:282] 0 containers: []
	W1208 01:59:36.027074 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:36.027084 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:36.027097 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:36.092294 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:36.092315 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:36.092330 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:36.120891 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:36.120927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:36.148475 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:36.148507 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:36.216306 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:36.216344 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
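The block above is one full iteration of minikube's log-gathering loop (cri.go / logs.go): it probes each expected control-plane container through crictl, finds none, and then falls back to collecting kubelet, dmesg, CRI-O, and container-status output. As an illustrative aside (not part of the harness output), a minimal shell sketch of an equivalent probe, using only commands that already appear in the log, would look like:

	# Illustrative sketch only; mirrors the probes logged above, run on the minikube node.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching \"${name}\""
	  fi
	done
	# When nothing is found, fall back to the same host logs minikube collects.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400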
	I1208 01:59:38.734253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:38.744803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:38.744884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:38.777276 1055021 cri.go:89] found id: ""
	I1208 01:59:38.777305 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.777314 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:38.777320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:38.777379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:38.815858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.815894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.815903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:38.815909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:38.815979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:38.845051 1055021 cri.go:89] found id: ""
	I1208 01:59:38.845084 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.845093 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:38.845098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:38.845164 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:38.870145 1055021 cri.go:89] found id: ""
	I1208 01:59:38.870178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.870187 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:38.870193 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:38.870261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:38.897461 1055021 cri.go:89] found id: ""
	I1208 01:59:38.897489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.897498 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:38.897505 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:38.897564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:38.923327 1055021 cri.go:89] found id: ""
	I1208 01:59:38.923351 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.923360 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:38.923367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:38.923430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:38.949858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.949884 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.949893 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:38.949899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:38.949963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:38.975805 1055021 cri.go:89] found id: ""
	I1208 01:59:38.975831 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.975840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:38.975849 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:38.975861 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:39.040102 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:39.040140 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:39.057980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:39.058045 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:39.129261 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:39.129281 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:39.129297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:39.157488 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:39.157524 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:41.687952 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:41.698803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:41.698906 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:41.724062 1055021 cri.go:89] found id: ""
	I1208 01:59:41.724139 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.724171 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:41.724184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:41.724260 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:41.756674 1055021 cri.go:89] found id: ""
	I1208 01:59:41.756712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.756720 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:41.756727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:41.756797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:41.793181 1055021 cri.go:89] found id: ""
	I1208 01:59:41.793208 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.793217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:41.793223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:41.793289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:41.823566 1055021 cri.go:89] found id: ""
	I1208 01:59:41.823589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.823597 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:41.823603 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:41.823660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:41.848188 1055021 cri.go:89] found id: ""
	I1208 01:59:41.848215 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.848224 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:41.848231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:41.848289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:41.874016 1055021 cri.go:89] found id: ""
	I1208 01:59:41.874053 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.874062 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:41.874068 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:41.874144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:41.901494 1055021 cri.go:89] found id: ""
	I1208 01:59:41.901517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.901525 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:41.901531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:41.901588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:41.927897 1055021 cri.go:89] found id: ""
	I1208 01:59:41.927919 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.927928 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:41.927936 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:41.927948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:41.989449 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:41.989523 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:41.989543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:42.035690 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:42.035724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:42.065962 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:42.066011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:42.136350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:42.136460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.657754 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:44.669949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:44.670036 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:44.700311 1055021 cri.go:89] found id: ""
	I1208 01:59:44.700341 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.700352 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:44.700358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:44.700422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:44.726358 1055021 cri.go:89] found id: ""
	I1208 01:59:44.726383 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.726392 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:44.726398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:44.726461 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:44.761403 1055021 cri.go:89] found id: ""
	I1208 01:59:44.761430 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.761440 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:44.761447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:44.761503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:44.792746 1055021 cri.go:89] found id: ""
	I1208 01:59:44.792771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.792780 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:44.792786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:44.792845 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:44.822139 1055021 cri.go:89] found id: ""
	I1208 01:59:44.822170 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.822179 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:44.822185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:44.822246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:44.848969 1055021 cri.go:89] found id: ""
	I1208 01:59:44.849036 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.849051 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:44.849060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:44.849123 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:44.877689 1055021 cri.go:89] found id: ""
	I1208 01:59:44.877712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.877720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:44.877727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:44.877792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:44.905370 1055021 cri.go:89] found id: ""
	I1208 01:59:44.905394 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.905403 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:44.905412 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:44.905424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.923373 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:44.923410 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:44.995648 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:44.995670 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:44.995684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:45.028693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:45.028744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:45.080489 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:45.080534 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:47.697315 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:47.707837 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:47.707910 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:47.731910 1055021 cri.go:89] found id: ""
	I1208 01:59:47.731934 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.731943 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:47.731950 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:47.732009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:47.765844 1055021 cri.go:89] found id: ""
	I1208 01:59:47.765869 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.765887 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:47.765894 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:47.765955 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:47.805305 1055021 cri.go:89] found id: ""
	I1208 01:59:47.805328 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.805342 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:47.805349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:47.805407 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:47.832547 1055021 cri.go:89] found id: ""
	I1208 01:59:47.832572 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.832581 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:47.832587 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:47.832646 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:47.857492 1055021 cri.go:89] found id: ""
	I1208 01:59:47.857517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.857526 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:47.857533 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:47.857595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:47.885564 1055021 cri.go:89] found id: ""
	I1208 01:59:47.885591 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.885599 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:47.885606 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:47.885668 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:47.914630 1055021 cri.go:89] found id: ""
	I1208 01:59:47.914655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.914664 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:47.914671 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:47.914737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:47.944185 1055021 cri.go:89] found id: ""
	I1208 01:59:47.944216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.944226 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:47.944236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:47.944247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:47.973585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:47.973622 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:48.011189 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:48.011218 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:48.078148 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:48.078187 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:48.098135 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:48.098167 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:48.174366 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
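Each "describe nodes" attempt above fails identically: kubectl cannot reach the API server on localhost:8443 because no kube-apiserver container is running. A hedged way to confirm this manually from inside the node, reusing the paths shown in the log and assuming the ss utility is available in the node image, would be:

	# Illustrative only; repeats the failing command from the log above.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Expected while the apiserver is down: "connection refused" on [::1]:8443.
	# Check whether anything is listening on the apiserver port at all:
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"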
	I1208 01:59:50.674625 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:50.685161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:50.685235 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:50.712131 1055021 cri.go:89] found id: ""
	I1208 01:59:50.712158 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.712167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:50.712175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:50.712236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:50.741188 1055021 cri.go:89] found id: ""
	I1208 01:59:50.741216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.741224 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:50.741231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:50.741325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:50.778993 1055021 cri.go:89] found id: ""
	I1208 01:59:50.779016 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.779026 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:50.779034 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:50.779103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:50.820444 1055021 cri.go:89] found id: ""
	I1208 01:59:50.820477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.820487 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:50.820494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:50.820552 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:50.845727 1055021 cri.go:89] found id: ""
	I1208 01:59:50.845752 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.845761 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:50.845768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:50.845833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:50.875375 1055021 cri.go:89] found id: ""
	I1208 01:59:50.875398 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.875406 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:50.875412 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:50.875472 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:50.899812 1055021 cri.go:89] found id: ""
	I1208 01:59:50.899836 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.899846 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:50.899852 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:50.899911 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:50.925692 1055021 cri.go:89] found id: ""
	I1208 01:59:50.925717 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.925725 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:50.925735 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:50.925751 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:50.991330 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:50.991366 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:51.010240 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:51.010276 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:51.075773 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:51.075801 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:51.075813 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:51.104705 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:51.104737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:53.634984 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:53.645378 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:53.645451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:53.676623 1055021 cri.go:89] found id: ""
	I1208 01:59:53.676647 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.676657 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:53.676664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:53.676723 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:53.700948 1055021 cri.go:89] found id: ""
	I1208 01:59:53.700973 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.700982 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:53.700988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:53.701047 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:53.725665 1055021 cri.go:89] found id: ""
	I1208 01:59:53.725689 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.725698 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:53.725704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:53.725760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:53.750770 1055021 cri.go:89] found id: ""
	I1208 01:59:53.750794 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.750803 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:53.750809 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:53.750885 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:53.784279 1055021 cri.go:89] found id: ""
	I1208 01:59:53.784304 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.784312 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:53.784319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:53.784378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:53.812355 1055021 cri.go:89] found id: ""
	I1208 01:59:53.812381 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.812390 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:53.812396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:53.812456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:53.837608 1055021 cri.go:89] found id: ""
	I1208 01:59:53.837634 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.837642 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:53.837648 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:53.837709 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:53.863046 1055021 cri.go:89] found id: ""
	I1208 01:59:53.863076 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.863085 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:53.863095 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:53.863136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:53.928268 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:53.928309 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:53.945830 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:53.945860 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:54.012382 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:54.012407 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:54.012447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:54.043446 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:54.043481 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:56.571785 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:56.582156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:56.582228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:56.611270 1055021 cri.go:89] found id: ""
	I1208 01:59:56.611292 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.611301 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:56.611307 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:56.611371 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:56.638765 1055021 cri.go:89] found id: ""
	I1208 01:59:56.638788 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.638797 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:56.638802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:56.638888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:56.663341 1055021 cri.go:89] found id: ""
	I1208 01:59:56.663368 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.663377 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:56.663383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:56.663495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:56.688606 1055021 cri.go:89] found id: ""
	I1208 01:59:56.688633 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.688643 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:56.688649 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:56.688730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:56.714263 1055021 cri.go:89] found id: ""
	I1208 01:59:56.714287 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.714296 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:56.714303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:56.714379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:56.738023 1055021 cri.go:89] found id: ""
	I1208 01:59:56.738047 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.738056 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:56.738062 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:56.738141 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:56.767926 1055021 cri.go:89] found id: ""
	I1208 01:59:56.767951 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.767960 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:56.767966 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:56.768071 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:56.801241 1055021 cri.go:89] found id: ""
	I1208 01:59:56.801268 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.801277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:56.801286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:56.801317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:56.873621 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:56.873657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:56.891086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:56.891116 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:56.956286 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:56.956306 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:56.956319 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:56.991921 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:56.991965 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.538010 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:59.548530 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:59.548598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:59.574677 1055021 cri.go:89] found id: ""
	I1208 01:59:59.574701 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.574709 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:59.574716 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:59.574779 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:59.600311 1055021 cri.go:89] found id: ""
	I1208 01:59:59.600337 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.600346 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:59.600352 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:59.600410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:59.627833 1055021 cri.go:89] found id: ""
	I1208 01:59:59.627858 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.627867 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:59.627873 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:59.627946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:59.652005 1055021 cri.go:89] found id: ""
	I1208 01:59:59.652029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.652038 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:59.652044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:59.652138 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:59.676487 1055021 cri.go:89] found id: ""
	I1208 01:59:59.676511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.676519 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:59.676525 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:59.676581 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:59.701988 1055021 cri.go:89] found id: ""
	I1208 01:59:59.702012 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.702020 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:59.702027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:59.702085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:59.726000 1055021 cri.go:89] found id: ""
	I1208 01:59:59.726025 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.726034 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:59.726040 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:59.726100 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:59.751097 1055021 cri.go:89] found id: ""
	I1208 01:59:59.751123 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.751131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:59.751141 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:59.751154 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:59.832931 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:59.832954 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:59.832966 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:59.862055 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:59.862089 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.890385 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:59.890414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:59.959793 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:59.959825 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.477852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:02.489201 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:02.489312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:02.516698 1055021 cri.go:89] found id: ""
	I1208 02:00:02.516725 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.516734 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:02.516741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:02.516825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:02.545938 1055021 cri.go:89] found id: ""
	I1208 02:00:02.545965 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.545974 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:02.545980 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:02.546051 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:02.574765 1055021 cri.go:89] found id: ""
	I1208 02:00:02.574799 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.574808 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:02.574815 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:02.574920 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:02.600958 1055021 cri.go:89] found id: ""
	I1208 02:00:02.600984 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.600992 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:02.601001 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:02.601061 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:02.627836 1055021 cri.go:89] found id: ""
	I1208 02:00:02.627862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.627872 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:02.627879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:02.627942 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:02.654803 1055021 cri.go:89] found id: ""
	I1208 02:00:02.654831 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.654864 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:02.654872 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:02.654938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:02.682455 1055021 cri.go:89] found id: ""
	I1208 02:00:02.682487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.682503 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:02.682510 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:02.682577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:02.709680 1055021 cri.go:89] found id: ""
	I1208 02:00:02.709709 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.709718 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:02.709728 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:02.709741 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:02.776682 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:02.776761 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.795697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:02.795794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:02.873752 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:02.873773 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:02.873787 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:02.903468 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:02.903511 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.438786 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:05.449615 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:05.449691 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:05.475122 1055021 cri.go:89] found id: ""
	I1208 02:00:05.475147 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.475156 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:05.475162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:05.475223 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:05.500749 1055021 cri.go:89] found id: ""
	I1208 02:00:05.500772 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.500781 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:05.500788 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:05.500854 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:05.526357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.526435 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.526456 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:05.526475 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:05.526564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:05.553466 1055021 cri.go:89] found id: ""
	I1208 02:00:05.553493 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.553502 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:05.553509 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:05.553570 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:05.583119 1055021 cri.go:89] found id: ""
	I1208 02:00:05.583145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.583154 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:05.583161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:05.583229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:05.613357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.613385 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.613394 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:05.613401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:05.613465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:05.639303 1055021 cri.go:89] found id: ""
	I1208 02:00:05.639328 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.639337 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:05.639358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:05.639422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:05.666333 1055021 cri.go:89] found id: ""
	I1208 02:00:05.666372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.666382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:05.666392 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:05.666405 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.696869 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:05.696901 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:05.762499 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:05.762536 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:05.780857 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:05.780889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:05.848522 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:05.848585 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:05.848598 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.377424 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:08.388192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:08.388265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:08.414029 1055021 cri.go:89] found id: ""
	I1208 02:00:08.414050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.414059 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:08.414065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:08.414127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:08.441760 1055021 cri.go:89] found id: ""
	I1208 02:00:08.441782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.441790 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:08.441796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:08.441857 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:08.466751 1055021 cri.go:89] found id: ""
	I1208 02:00:08.466774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.466783 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:08.466789 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:08.466870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:08.493249 1055021 cri.go:89] found id: ""
	I1208 02:00:08.493272 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.493280 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:08.493287 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:08.493345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:08.519677 1055021 cri.go:89] found id: ""
	I1208 02:00:08.519707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.519716 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:08.519722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:08.519788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:08.545435 1055021 cri.go:89] found id: ""
	I1208 02:00:08.545460 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.545469 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:08.545476 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:08.545538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:08.576588 1055021 cri.go:89] found id: ""
	I1208 02:00:08.576612 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.576621 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:08.576628 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:08.576719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:08.602665 1055021 cri.go:89] found id: ""
	I1208 02:00:08.602689 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.602697 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:08.602706 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:08.602737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:08.668015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:08.668065 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:08.685174 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:08.685203 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:08.750092 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:08.750113 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:08.750127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.781244 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:08.781278 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.323549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:11.333988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:11.334059 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:11.359294 1055021 cri.go:89] found id: ""
	I1208 02:00:11.359316 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.359325 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:11.359331 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:11.359391 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:11.385252 1055021 cri.go:89] found id: ""
	I1208 02:00:11.385274 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.385283 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:11.385289 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:11.385354 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:11.411462 1055021 cri.go:89] found id: ""
	I1208 02:00:11.411485 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.411494 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:11.411501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:11.411560 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:11.437020 1055021 cri.go:89] found id: ""
	I1208 02:00:11.437043 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.437052 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:11.437059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:11.437142 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:11.462749 1055021 cri.go:89] found id: ""
	I1208 02:00:11.462774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.462788 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:11.462795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:11.462912 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:11.487618 1055021 cri.go:89] found id: ""
	I1208 02:00:11.487642 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.487650 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:11.487656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:11.487738 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:11.517338 1055021 cri.go:89] found id: ""
	I1208 02:00:11.517411 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.517435 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:11.517454 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:11.517582 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:11.543576 1055021 cri.go:89] found id: ""
	I1208 02:00:11.543608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.543618 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:11.543670 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:11.543687 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:11.605714 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:11.605738 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:11.605754 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:11.634573 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:11.634608 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.663270 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:11.663297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:11.728036 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:11.728073 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.245900 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:14.259346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:14.259447 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:14.292891 1055021 cri.go:89] found id: ""
	I1208 02:00:14.292913 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.292922 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:14.292928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:14.292995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:14.326384 1055021 cri.go:89] found id: ""
	I1208 02:00:14.326408 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.326418 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:14.326425 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:14.326485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:14.354623 1055021 cri.go:89] found id: ""
	I1208 02:00:14.354646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.354654 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:14.354660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:14.354719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:14.382160 1055021 cri.go:89] found id: ""
	I1208 02:00:14.382187 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.382196 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:14.382203 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:14.382261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:14.408072 1055021 cri.go:89] found id: ""
	I1208 02:00:14.408141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.408166 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:14.408184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:14.408273 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:14.433739 1055021 cri.go:89] found id: ""
	I1208 02:00:14.433767 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.433776 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:14.433783 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:14.433889 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:14.460882 1055021 cri.go:89] found id: ""
	I1208 02:00:14.460906 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.460914 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:14.460921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:14.461002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:14.486630 1055021 cri.go:89] found id: ""
	I1208 02:00:14.486707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.486732 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:14.486755 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:14.486781 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:14.552732 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:14.552769 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.570940 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:14.570975 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:14.636277 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:14.636301 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:14.636317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:14.664410 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:14.664447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:17.192894 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:17.203129 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:17.203200 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:17.228497 1055021 cri.go:89] found id: ""
	I1208 02:00:17.228519 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.228528 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:17.228534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:17.228598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:17.253841 1055021 cri.go:89] found id: ""
	I1208 02:00:17.253862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.253871 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:17.253887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:17.253945 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:17.284067 1055021 cri.go:89] found id: ""
	I1208 02:00:17.284088 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.284097 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:17.284103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:17.284162 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:17.320641 1055021 cri.go:89] found id: ""
	I1208 02:00:17.320668 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.320678 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:17.320684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:17.320748 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:17.347071 1055021 cri.go:89] found id: ""
	I1208 02:00:17.347094 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.347103 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:17.347109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:17.347227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:17.373328 1055021 cri.go:89] found id: ""
	I1208 02:00:17.373357 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.373366 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:17.373372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:17.373439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:17.400408 1055021 cri.go:89] found id: ""
	I1208 02:00:17.400437 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.400446 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:17.400456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:17.400515 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:17.426232 1055021 cri.go:89] found id: ""
	I1208 02:00:17.426268 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.426277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:17.426286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:17.426298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:17.491052 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:17.491092 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:17.509546 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:17.509575 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:17.578008 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:17.578068 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:17.578090 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:17.606330 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:17.606368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:20.139003 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:20.149823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:20.149894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:20.176541 1055021 cri.go:89] found id: ""
	I1208 02:00:20.176568 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.176577 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:20.176583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:20.176647 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:20.209117 1055021 cri.go:89] found id: ""
	I1208 02:00:20.209141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.209149 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:20.209156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:20.209222 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:20.235819 1055021 cri.go:89] found id: ""
	I1208 02:00:20.235846 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.235861 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:20.235867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:20.235933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:20.268968 1055021 cri.go:89] found id: ""
	I1208 02:00:20.268997 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.269006 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:20.269019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:20.269079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:20.302684 1055021 cri.go:89] found id: ""
	I1208 02:00:20.302712 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.302721 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:20.302728 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:20.302814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:20.330459 1055021 cri.go:89] found id: ""
	I1208 02:00:20.330535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.330550 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:20.330557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:20.330632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:20.358743 1055021 cri.go:89] found id: ""
	I1208 02:00:20.358778 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.358787 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:20.358793 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:20.358881 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:20.384853 1055021 cri.go:89] found id: ""
	I1208 02:00:20.384883 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.384892 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:20.384909 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:20.384921 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:20.450466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:20.450505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:20.468842 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:20.468872 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:20.533689 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:20.533717 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:20.533732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:20.561211 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:20.561245 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.093217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:23.103855 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:23.103935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:23.129008 1055021 cri.go:89] found id: ""
	I1208 02:00:23.129084 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.129113 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:23.129122 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:23.129192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:23.154045 1055021 cri.go:89] found id: ""
	I1208 02:00:23.154071 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.154079 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:23.154086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:23.154144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:23.179982 1055021 cri.go:89] found id: ""
	I1208 02:00:23.180009 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.180018 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:23.180025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:23.180085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:23.205725 1055021 cri.go:89] found id: ""
	I1208 02:00:23.205751 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.205760 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:23.205767 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:23.205825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:23.233180 1055021 cri.go:89] found id: ""
	I1208 02:00:23.233206 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.233214 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:23.233221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:23.233280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:23.260814 1055021 cri.go:89] found id: ""
	I1208 02:00:23.260841 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.260850 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:23.260856 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:23.260915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:23.289337 1055021 cri.go:89] found id: ""
	I1208 02:00:23.289369 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.289379 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:23.289384 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:23.289451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:23.326356 1055021 cri.go:89] found id: ""
	I1208 02:00:23.326383 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.326392 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:23.326401 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:23.326414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:23.344175 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:23.344207 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:23.409693 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:23.409767 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:23.409793 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:23.437814 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:23.437848 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.472006 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:23.472034 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.036954 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:26.050218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:26.050295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:26.084077 1055021 cri.go:89] found id: ""
	I1208 02:00:26.084101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.084110 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:26.084117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:26.084179 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:26.115433 1055021 cri.go:89] found id: ""
	I1208 02:00:26.115458 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.115467 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:26.115473 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:26.115548 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:26.142798 1055021 cri.go:89] found id: ""
	I1208 02:00:26.142821 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.142829 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:26.142836 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:26.142923 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:26.169427 1055021 cri.go:89] found id: ""
	I1208 02:00:26.169449 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.169457 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:26.169465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:26.169523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:26.196837 1055021 cri.go:89] found id: ""
	I1208 02:00:26.196863 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.196873 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:26.196879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:26.196940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:26.222671 1055021 cri.go:89] found id: ""
	I1208 02:00:26.222694 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.222702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:26.222709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:26.222770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:26.258674 1055021 cri.go:89] found id: ""
	I1208 02:00:26.258696 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.258705 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:26.258711 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:26.258769 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:26.297463 1055021 cri.go:89] found id: ""
	I1208 02:00:26.297486 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.297496 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:26.297505 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:26.297520 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:26.329140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:26.329223 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:26.359625 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:26.359657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.424937 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:26.424974 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:26.443260 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:26.443293 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:26.509592 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:29.010492 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:29.023086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:29.023160 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:29.051358 1055021 cri.go:89] found id: ""
	I1208 02:00:29.051380 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.051389 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:29.051395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:29.051456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:29.085536 1055021 cri.go:89] found id: ""
	I1208 02:00:29.085566 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.085575 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:29.085583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:29.085649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:29.114380 1055021 cri.go:89] found id: ""
	I1208 02:00:29.114407 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.114416 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:29.114422 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:29.114483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:29.139608 1055021 cri.go:89] found id: ""
	I1208 02:00:29.139697 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.139713 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:29.139722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:29.139800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:29.167030 1055021 cri.go:89] found id: ""
	I1208 02:00:29.167055 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.167100 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:29.167107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:29.167173 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:29.191898 1055021 cri.go:89] found id: ""
	I1208 02:00:29.191920 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.191929 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:29.191935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:29.191992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:29.216839 1055021 cri.go:89] found id: ""
	I1208 02:00:29.216870 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.216879 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:29.216889 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:29.216975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:29.246347 1055021 cri.go:89] found id: ""
	I1208 02:00:29.246372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.246382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:29.246391 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:29.246421 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:29.266473 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:29.266509 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:29.345611 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:29.345636 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:29.345648 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:29.375020 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:29.375060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:29.402360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:29.402386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:31.967515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:31.978076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:31.978147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:32.018381 1055021 cri.go:89] found id: ""
	I1208 02:00:32.018457 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.018480 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:32.018500 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:32.018611 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:32.054678 1055021 cri.go:89] found id: ""
	I1208 02:00:32.054700 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.054709 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:32.054715 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:32.054775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:32.085659 1055021 cri.go:89] found id: ""
	I1208 02:00:32.085686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.085695 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:32.085701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:32.085809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:32.112827 1055021 cri.go:89] found id: ""
	I1208 02:00:32.112892 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.112907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:32.112914 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:32.112973 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:32.141486 1055021 cri.go:89] found id: ""
	I1208 02:00:32.141513 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.141521 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:32.141527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:32.141591 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:32.166463 1055021 cri.go:89] found id: ""
	I1208 02:00:32.166489 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.166498 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:32.166504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:32.166566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:32.196018 1055021 cri.go:89] found id: ""
	I1208 02:00:32.196086 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.196111 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:32.196125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:32.196198 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:32.219763 1055021 cri.go:89] found id: ""
	I1208 02:00:32.219802 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.219812 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:32.219821 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:32.219834 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:32.237401 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:32.237431 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:32.335697 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:32.335720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:32.335732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:32.364998 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:32.365043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:32.394072 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:32.394099 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:34.958230 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:34.968535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:34.968606 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:34.993490 1055021 cri.go:89] found id: ""
	I1208 02:00:34.993515 1055021 logs.go:282] 0 containers: []
	W1208 02:00:34.993524 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:34.993531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:34.993588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:35.026482 1055021 cri.go:89] found id: ""
	I1208 02:00:35.026511 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.026521 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:35.026529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:35.026595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:35.062109 1055021 cri.go:89] found id: ""
	I1208 02:00:35.062138 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.062147 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:35.062154 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:35.062218 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:35.094672 1055021 cri.go:89] found id: ""
	I1208 02:00:35.094706 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.094715 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:35.094722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:35.094784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:35.120981 1055021 cri.go:89] found id: ""
	I1208 02:00:35.121007 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.121016 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:35.121022 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:35.121087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:35.147283 1055021 cri.go:89] found id: ""
	I1208 02:00:35.147310 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.147321 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:35.147329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:35.147392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:35.174946 1055021 cri.go:89] found id: ""
	I1208 02:00:35.175038 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.175075 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:35.175115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:35.175224 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:35.205558 1055021 cri.go:89] found id: ""
	I1208 02:00:35.205583 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.205592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:35.205601 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:35.205636 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:35.273454 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:35.273537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:35.294102 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:35.294182 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:35.363206 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:35.363227 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:35.363240 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:35.391418 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:35.391457 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:37.922946 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:37.933320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:37.933392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:37.959213 1055021 cri.go:89] found id: ""
	I1208 02:00:37.959237 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.959247 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:37.959253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:37.959311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:37.983822 1055021 cri.go:89] found id: ""
	I1208 02:00:37.983844 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.983853 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:37.983859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:37.983917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:38.015881 1055021 cri.go:89] found id: ""
	I1208 02:00:38.015909 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.015919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:38.015927 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:38.015994 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:38.047948 1055021 cri.go:89] found id: ""
	I1208 02:00:38.047971 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.047979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:38.047985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:38.048049 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:38.098187 1055021 cri.go:89] found id: ""
	I1208 02:00:38.098216 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.098227 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:38.098234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:38.098298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:38.122930 1055021 cri.go:89] found id: ""
	I1208 02:00:38.122952 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.122960 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:38.122967 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:38.123028 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:38.148405 1055021 cri.go:89] found id: ""
	I1208 02:00:38.148439 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.148449 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:38.148455 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:38.148513 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:38.174446 1055021 cri.go:89] found id: ""
	I1208 02:00:38.174522 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.174544 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:38.174565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:38.174602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:38.239470 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:38.239505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:38.257924 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:38.258079 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:38.328235 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:38.328302 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:38.328321 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:38.356585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:38.356619 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:40.887527 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:40.897939 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:40.898011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:40.922663 1055021 cri.go:89] found id: ""
	I1208 02:00:40.922686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.922695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:40.922701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:40.922760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:40.947304 1055021 cri.go:89] found id: ""
	I1208 02:00:40.947371 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.947397 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:40.947409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:40.947484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:40.973263 1055021 cri.go:89] found id: ""
	I1208 02:00:40.973290 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.973299 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:40.973305 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:40.973365 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:40.998615 1055021 cri.go:89] found id: ""
	I1208 02:00:40.998648 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.998658 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:40.998665 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:40.998735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:41.034153 1055021 cri.go:89] found id: ""
	I1208 02:00:41.034180 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.034190 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:41.034196 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:41.034255 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:41.063886 1055021 cri.go:89] found id: ""
	I1208 02:00:41.063916 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.063925 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:41.063931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:41.063993 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:41.090937 1055021 cri.go:89] found id: ""
	I1208 02:00:41.090966 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.090976 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:41.090982 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:41.091046 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:41.117814 1055021 cri.go:89] found id: ""
	I1208 02:00:41.117839 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.117849 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:41.117858 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:41.117870 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:41.182312 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:41.182348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:41.200044 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:41.200071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:41.273066 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:41.273095 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:41.273108 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:41.308256 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:41.308298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:43.843380 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:43.854135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:43.854204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:43.879332 1055021 cri.go:89] found id: ""
	I1208 02:00:43.879356 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.879365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:43.879371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:43.879431 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:43.903897 1055021 cri.go:89] found id: ""
	I1208 02:00:43.903921 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.903930 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:43.903935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:43.904010 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:43.928349 1055021 cri.go:89] found id: ""
	I1208 02:00:43.928377 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.928386 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:43.928396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:43.928453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:43.957013 1055021 cri.go:89] found id: ""
	I1208 02:00:43.957046 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.957060 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:43.957066 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:43.957137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:43.981711 1055021 cri.go:89] found id: ""
	I1208 02:00:43.981784 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.981819 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:43.981843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:43.981933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:44.021808 1055021 cri.go:89] found id: ""
	I1208 02:00:44.021842 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.021851 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:44.021859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:44.021940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:44.053536 1055021 cri.go:89] found id: ""
	I1208 02:00:44.053608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.053631 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:44.053650 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:44.053735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:44.087893 1055021 cri.go:89] found id: ""
	I1208 02:00:44.087958 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.087975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:44.087985 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:44.087997 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:44.153453 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:44.153493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:44.172720 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:44.172750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:44.242553 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 02:00:44.242575 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:44.242587 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:44.273804 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:44.273889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:46.805601 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:46.815929 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:46.815999 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:46.840623 1055021 cri.go:89] found id: ""
	I1208 02:00:46.840646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.840655 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:46.840661 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:46.840721 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:46.866056 1055021 cri.go:89] found id: ""
	I1208 02:00:46.866082 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.866090 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:46.866096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:46.866156 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:46.890598 1055021 cri.go:89] found id: ""
	I1208 02:00:46.890623 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.890632 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:46.890638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:46.890699 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:46.917031 1055021 cri.go:89] found id: ""
	I1208 02:00:46.917101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.917125 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:46.917142 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:46.917230 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:46.941427 1055021 cri.go:89] found id: ""
	I1208 02:00:46.941450 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.941459 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:46.941465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:46.941524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:46.971991 1055021 cri.go:89] found id: ""
	I1208 02:00:46.972015 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.972024 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:46.972031 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:46.972087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:47.000365 1055021 cri.go:89] found id: ""
	I1208 02:00:47.000393 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.000402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:47.000409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:47.000500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:47.039853 1055021 cri.go:89] found id: ""
	I1208 02:00:47.039934 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.039968 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:47.040014 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:47.040070 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:47.124159 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:47.124199 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:47.142393 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:47.142436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:47.204667 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:47.204688 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:47.204700 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:47.233531 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:47.233572 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:49.777314 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:49.787953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:49.788027 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:49.814344 1055021 cri.go:89] found id: ""
	I1208 02:00:49.814368 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.814376 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:49.814383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:49.814443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:49.843148 1055021 cri.go:89] found id: ""
	I1208 02:00:49.843172 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.843180 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:49.843187 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:49.843245 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:49.868221 1055021 cri.go:89] found id: ""
	I1208 02:00:49.868245 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.868253 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:49.868260 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:49.868319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:49.892756 1055021 cri.go:89] found id: ""
	I1208 02:00:49.892782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.892792 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:49.892799 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:49.892879 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:49.921697 1055021 cri.go:89] found id: ""
	I1208 02:00:49.921730 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.921738 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:49.921745 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:49.921818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:49.946935 1055021 cri.go:89] found id: ""
	I1208 02:00:49.947000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.947018 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:49.947025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:49.947102 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:49.972386 1055021 cri.go:89] found id: ""
	I1208 02:00:49.972410 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.972418 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:49.972427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:49.972485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:49.997299 1055021 cri.go:89] found id: ""
	I1208 02:00:49.997324 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.997332 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:49.997342 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:49.997354 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:50.024427 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:50.024465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:50.106428 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:50.106452 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:50.106466 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:50.134825 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:50.134944 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:50.164257 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:50.164286 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:52.731852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:52.743466 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:52.743547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:52.770730 1055021 cri.go:89] found id: ""
	I1208 02:00:52.770754 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.770763 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:52.770769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:52.770837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:52.795524 1055021 cri.go:89] found id: ""
	I1208 02:00:52.795547 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.795555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:52.795562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:52.795622 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:52.820947 1055021 cri.go:89] found id: ""
	I1208 02:00:52.820976 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.820986 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:52.820993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:52.821054 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:52.846461 1055021 cri.go:89] found id: ""
	I1208 02:00:52.846487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.846495 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:52.846502 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:52.846614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:52.876556 1055021 cri.go:89] found id: ""
	I1208 02:00:52.876582 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.876591 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:52.876598 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:52.876658 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:52.902890 1055021 cri.go:89] found id: ""
	I1208 02:00:52.902915 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.902924 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:52.902931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:52.902995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:52.927861 1055021 cri.go:89] found id: ""
	I1208 02:00:52.927936 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.927952 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:52.927960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:52.928018 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:52.952070 1055021 cri.go:89] found id: ""
	I1208 02:00:52.952093 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.952102 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:52.952111 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:52.952123 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:52.969988 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:52.970071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:53.047400 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:53.047420 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:53.047432 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:53.079007 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:53.079096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:53.110493 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:53.110518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:55.678655 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:55.689237 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:55.689308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:55.716663 1055021 cri.go:89] found id: ""
	I1208 02:00:55.716685 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.716694 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:55.716700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:55.716767 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:55.742016 1055021 cri.go:89] found id: ""
	I1208 02:00:55.742042 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.742051 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:55.742057 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:55.742117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:55.771093 1055021 cri.go:89] found id: ""
	I1208 02:00:55.771116 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.771125 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:55.771131 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:55.771192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:55.795221 1055021 cri.go:89] found id: ""
	I1208 02:00:55.795243 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.795252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:55.795258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:55.795321 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:55.824380 1055021 cri.go:89] found id: ""
	I1208 02:00:55.824402 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.824411 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:55.824417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:55.824482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:55.853339 1055021 cri.go:89] found id: ""
	I1208 02:00:55.853362 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.853370 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:55.853376 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:55.853439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:55.879120 1055021 cri.go:89] found id: ""
	I1208 02:00:55.879145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.879154 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:55.879160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:55.879229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:55.904782 1055021 cri.go:89] found id: ""
	I1208 02:00:55.904811 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.904820 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:55.904829 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:55.904840 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:55.936603 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:55.936627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:56.002394 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:56.002436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:56.025805 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:56.025962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:56.100621 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:56.100643 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:56.100655 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:58.632608 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:58.643205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:58.643281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:58.668717 1055021 cri.go:89] found id: ""
	I1208 02:00:58.668741 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.668750 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:58.668756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:58.668818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:58.693510 1055021 cri.go:89] found id: ""
	I1208 02:00:58.693535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.693543 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:58.693550 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:58.693614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:58.718959 1055021 cri.go:89] found id: ""
	I1208 02:00:58.719050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.719071 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:58.719079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:58.719153 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:58.743668 1055021 cri.go:89] found id: ""
	I1208 02:00:58.743691 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.743700 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:58.743707 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:58.743765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:58.772612 1055021 cri.go:89] found id: ""
	I1208 02:00:58.772679 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.772700 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:58.772718 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:58.772809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:58.798178 1055021 cri.go:89] found id: ""
	I1208 02:00:58.798204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.798212 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:58.798218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:58.798278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:58.822926 1055021 cri.go:89] found id: ""
	I1208 02:00:58.823000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.823018 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:58.823026 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:58.823097 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:58.849170 1055021 cri.go:89] found id: ""
	I1208 02:00:58.849204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.849214 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:58.849249 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:58.849273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:58.916845 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:58.916884 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:58.934980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:58.935008 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:59.004330 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:59.004355 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:59.004368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:59.034521 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:59.034558 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.569349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:01.581275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:01.581356 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:01.614013 1055021 cri.go:89] found id: ""
	I1208 02:01:01.614040 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.614052 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:01.614059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:01.614120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:01.642283 1055021 cri.go:89] found id: ""
	I1208 02:01:01.642311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.642321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:01.642327 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:01.642388 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:01.668888 1055021 cri.go:89] found id: ""
	I1208 02:01:01.668916 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.668927 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:01.668933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:01.669045 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:01.696848 1055021 cri.go:89] found id: ""
	I1208 02:01:01.696890 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.696917 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:01.696924 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:01.697002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:01.724280 1055021 cri.go:89] found id: ""
	I1208 02:01:01.724314 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.724323 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:01.724329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:01.724397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:01.757961 1055021 cri.go:89] found id: ""
	I1208 02:01:01.757993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.758002 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:01.758009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:01.758076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:01.791626 1055021 cri.go:89] found id: ""
	I1208 02:01:01.791652 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.791663 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:01.791669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:01.791734 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:01.824543 1055021 cri.go:89] found id: ""
	I1208 02:01:01.824614 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.824631 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:01.824643 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:01.824656 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.858339 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:01.858368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:01.923001 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:01.923043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:01.942107 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:01.942139 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:02.016342 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:02.016379 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:02.016393 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.550723 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:04.561389 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:04.561458 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:04.587293 1055021 cri.go:89] found id: ""
	I1208 02:01:04.587319 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.587329 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:04.587335 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:04.587398 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:04.612287 1055021 cri.go:89] found id: ""
	I1208 02:01:04.612313 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.612321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:04.612328 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:04.612389 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:04.637981 1055021 cri.go:89] found id: ""
	I1208 02:01:04.638006 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.638016 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:04.638023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:04.638083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:04.666122 1055021 cri.go:89] found id: ""
	I1208 02:01:04.666150 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.666159 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:04.666166 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:04.666228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:04.691775 1055021 cri.go:89] found id: ""
	I1208 02:01:04.691799 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.691807 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:04.691813 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:04.691877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:04.716584 1055021 cri.go:89] found id: ""
	I1208 02:01:04.716610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.716619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:04.716626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:04.716684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:04.741247 1055021 cri.go:89] found id: ""
	I1208 02:01:04.741284 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.741297 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:04.741303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:04.741394 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:04.777041 1055021 cri.go:89] found id: ""
	I1208 02:01:04.777070 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.777079 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:04.777088 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:04.777100 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:04.797448 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:04.797478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:04.865442 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:04.865465 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:04.865478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.893232 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:04.893270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:04.921152 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:04.921183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.486177 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:07.496522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:07.496608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:07.521126 1055021 cri.go:89] found id: ""
	I1208 02:01:07.521202 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.521226 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:07.521244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:07.521333 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:07.549393 1055021 cri.go:89] found id: ""
	I1208 02:01:07.549458 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.549483 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:07.549501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:07.549585 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:07.575624 1055021 cri.go:89] found id: ""
	I1208 02:01:07.575699 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.575715 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:07.575722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:07.575784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:07.604231 1055021 cri.go:89] found id: ""
	I1208 02:01:07.604296 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.604310 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:07.604317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:07.604377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:07.629146 1055021 cri.go:89] found id: ""
	I1208 02:01:07.629177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.629186 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:07.629192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:07.629267 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:07.654573 1055021 cri.go:89] found id: ""
	I1208 02:01:07.654598 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.654607 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:07.654614 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:07.654682 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:07.679672 1055021 cri.go:89] found id: ""
	I1208 02:01:07.679746 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.679762 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:07.679769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:07.679841 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:07.705327 1055021 cri.go:89] found id: ""
	I1208 02:01:07.705353 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.705362 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:07.705371 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:07.705386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.770583 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:07.770665 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:07.788444 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:07.788473 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:07.862214 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:07.862236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:07.862248 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:07.891006 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:07.891043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
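
The cycle above probes each control-plane component by asking crictl for containers whose name matches, and an empty result is what produces the repeated `found id: ""` / `No container was found matching` lines. A minimal Go sketch of that probe follows; it is an illustration only (not minikube's actual cri.go/logs.go code) and assumes `sudo` and `crictl` are available on the host being checked.

// Hypothetical sketch: list CRI containers by name and report when none match,
// mirroring the `crictl ps -a --quiet --name=<component>` calls logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the equivalent of: sudo crictl ps -a --quiet --name=<name>
// and returns the non-empty container IDs it prints, one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the "No container was found matching" warnings above.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
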
	I1208 02:01:10.422919 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:10.433424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:10.433496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:10.458269 1055021 cri.go:89] found id: ""
	I1208 02:01:10.458295 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.458303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:10.458319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:10.458397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:10.485114 1055021 cri.go:89] found id: ""
	I1208 02:01:10.485138 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.485146 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:10.485152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:10.485211 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:10.512785 1055021 cri.go:89] found id: ""
	I1208 02:01:10.512808 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.512817 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:10.512823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:10.512884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:10.538032 1055021 cri.go:89] found id: ""
	I1208 02:01:10.538057 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.538066 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:10.538072 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:10.538130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:10.568288 1055021 cri.go:89] found id: ""
	I1208 02:01:10.568311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.568364 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:10.568379 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:10.568445 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:10.593987 1055021 cri.go:89] found id: ""
	I1208 02:01:10.594012 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.594021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:10.594028 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:10.594087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:10.619212 1055021 cri.go:89] found id: ""
	I1208 02:01:10.619237 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.619245 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:10.619251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:10.619311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:10.645349 1055021 cri.go:89] found id: ""
	I1208 02:01:10.645384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.645393 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:10.645402 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:10.645414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:10.707691 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:10.707713 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:10.707726 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:10.735113 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:10.735148 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.768113 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:10.768142 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:10.843634 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:10.843672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
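
Every `describe nodes` attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8443 (`connection refused`), which is consistent with no kube-apiserver container being found. A small Go sketch of that reachability check is shown below; it is only an illustration under the assumption that the apiserver, when healthy, serves HTTPS on localhost:8443 with a self-signed certificate.

// Hypothetical sketch: probe https://localhost:8443/healthz and report whether
// the apiserver answers, i.e. the condition the kubectl errors above keep failing.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Reachability probe only: skip cert verification for the
			// apiserver's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no apiserver listening this prints a "connection refused" error,
		// matching the stderr blocks in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded with status:", resp.Status)
}
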
	I1208 02:01:13.362994 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:13.373991 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:13.374082 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:13.400090 1055021 cri.go:89] found id: ""
	I1208 02:01:13.400127 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.400136 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:13.400143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:13.400212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:13.425846 1055021 cri.go:89] found id: ""
	I1208 02:01:13.425872 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.425881 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:13.425887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:13.425949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:13.451450 1055021 cri.go:89] found id: ""
	I1208 02:01:13.451478 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.451487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:13.451493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:13.451554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:13.476315 1055021 cri.go:89] found id: ""
	I1208 02:01:13.476341 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.476350 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:13.476357 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:13.476419 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:13.503320 1055021 cri.go:89] found id: ""
	I1208 02:01:13.503346 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.503355 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:13.503362 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:13.503430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:13.528258 1055021 cri.go:89] found id: ""
	I1208 02:01:13.528290 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.528299 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:13.528306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:13.528375 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:13.553751 1055021 cri.go:89] found id: ""
	I1208 02:01:13.553784 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.553794 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:13.553800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:13.553871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:13.580159 1055021 cri.go:89] found id: ""
	I1208 02:01:13.580183 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.580192 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:13.580200 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:13.580212 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:13.649628 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:13.649678 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.668358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:13.668451 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:13.739767 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:13.739835 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:13.739881 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:13.771646 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:13.771684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.306613 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:16.317302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:16.317372 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:16.343331 1055021 cri.go:89] found id: ""
	I1208 02:01:16.343356 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.343365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:16.343374 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:16.343433 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:16.369486 1055021 cri.go:89] found id: ""
	I1208 02:01:16.369507 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.369516 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:16.369522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:16.369589 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:16.394887 1055021 cri.go:89] found id: ""
	I1208 02:01:16.394911 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.394919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:16.394926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:16.394983 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:16.419429 1055021 cri.go:89] found id: ""
	I1208 02:01:16.419453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.419461 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:16.419467 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:16.419532 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:16.447941 1055021 cri.go:89] found id: ""
	I1208 02:01:16.448014 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.448038 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:16.448060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:16.448137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:16.477380 1055021 cri.go:89] found id: ""
	I1208 02:01:16.477404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.477414 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:16.477420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:16.477479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:16.502633 1055021 cri.go:89] found id: ""
	I1208 02:01:16.502658 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.502667 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:16.502674 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:16.502776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:16.532861 1055021 cri.go:89] found id: ""
	I1208 02:01:16.532886 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.532895 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:16.532904 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:16.532943 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.561207 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:16.561235 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:16.629585 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:16.629623 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:16.647847 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:16.647876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:16.713384 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:16.713404 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:16.713417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.242742 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:19.253432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:19.253496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:19.282053 1055021 cri.go:89] found id: ""
	I1208 02:01:19.282075 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.282091 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:19.282097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:19.282154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:19.317196 1055021 cri.go:89] found id: ""
	I1208 02:01:19.317218 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.317226 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:19.317232 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:19.317291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:19.344133 1055021 cri.go:89] found id: ""
	I1208 02:01:19.344155 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.344164 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:19.344170 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:19.344231 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:19.369544 1055021 cri.go:89] found id: ""
	I1208 02:01:19.369567 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.369576 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:19.369582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:19.369641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:19.394138 1055021 cri.go:89] found id: ""
	I1208 02:01:19.394161 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.394170 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:19.394176 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:19.394234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:19.421882 1055021 cri.go:89] found id: ""
	I1208 02:01:19.421906 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.421915 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:19.421921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:19.421991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:19.447254 1055021 cri.go:89] found id: ""
	I1208 02:01:19.447280 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.447289 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:19.447295 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:19.447359 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:19.471872 1055021 cri.go:89] found id: ""
	I1208 02:01:19.471898 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.471907 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:19.471916 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:19.471929 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:19.537545 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:19.537583 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:19.556105 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:19.556134 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:19.617255 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:19.617275 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:19.617288 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.645378 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:19.645413 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
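
The timestamps show the same sequence repeating roughly every three seconds, starting each time from `sudo pgrep -xnf kube-apiserver.*minikube.*`. The Go sketch below illustrates that kind of wait loop; it is not minikube's actual retry code, and it assumes `pgrep` is present and exits non-zero when no process matches.

// Hypothetical sketch: poll for a running kube-apiserver process every few
// seconds until it appears or a deadline passes, as the log cadence suggests.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning runs the equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when at least one matching process exists.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
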
	I1208 02:01:22.176988 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:22.187407 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:22.187482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:22.216526 1055021 cri.go:89] found id: ""
	I1208 02:01:22.216551 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.216560 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:22.216567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:22.216629 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:22.241409 1055021 cri.go:89] found id: ""
	I1208 02:01:22.241437 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.241446 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:22.241452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:22.241510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:22.275844 1055021 cri.go:89] found id: ""
	I1208 02:01:22.275873 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.275882 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:22.275888 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:22.275951 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:22.304532 1055021 cri.go:89] found id: ""
	I1208 02:01:22.304560 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.304575 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:22.304582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:22.304640 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:22.347626 1055021 cri.go:89] found id: ""
	I1208 02:01:22.347653 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.347663 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:22.347669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:22.347730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:22.374178 1055021 cri.go:89] found id: ""
	I1208 02:01:22.374205 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.374215 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:22.374221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:22.374280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:22.404202 1055021 cri.go:89] found id: ""
	I1208 02:01:22.404229 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.404238 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:22.404244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:22.404311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:22.429827 1055021 cri.go:89] found id: ""
	I1208 02:01:22.429852 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.429861 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:22.429869 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:22.429880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.461216 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:22.461241 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:22.529595 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:22.529634 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:22.547808 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:22.547841 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:22.614795 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:22.614824 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:22.614836 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.143485 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:25.154329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:25.154413 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:25.180079 1055021 cri.go:89] found id: ""
	I1208 02:01:25.180105 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.180114 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:25.180121 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:25.180180 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:25.204723 1055021 cri.go:89] found id: ""
	I1208 02:01:25.204753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.204761 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:25.204768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:25.204825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:25.229571 1055021 cri.go:89] found id: ""
	I1208 02:01:25.229596 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.229604 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:25.229611 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:25.229669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:25.256859 1055021 cri.go:89] found id: ""
	I1208 02:01:25.256888 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.256896 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:25.256903 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:25.256966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:25.286130 1055021 cri.go:89] found id: ""
	I1208 02:01:25.286159 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.286169 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:25.286175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:25.286240 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:25.316764 1055021 cri.go:89] found id: ""
	I1208 02:01:25.316797 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.316806 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:25.316819 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:25.316888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:25.343685 1055021 cri.go:89] found id: ""
	I1208 02:01:25.343753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.343781 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:25.343795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:25.343874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:25.368793 1055021 cri.go:89] found id: ""
	I1208 02:01:25.368819 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.368828 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:25.368864 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:25.368882 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:25.386567 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:25.386594 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:25.454148 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:25.454180 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:25.454193 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.482372 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:25.482406 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:25.512534 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:25.512561 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.077014 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:28.087810 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:28.087929 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:28.117064 1055021 cri.go:89] found id: ""
	I1208 02:01:28.117090 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.117100 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:28.117107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:28.117166 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:28.142720 1055021 cri.go:89] found id: ""
	I1208 02:01:28.142747 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.142756 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:28.142763 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:28.142820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:28.169323 1055021 cri.go:89] found id: ""
	I1208 02:01:28.169349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.169357 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:28.169364 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:28.169423 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:28.198413 1055021 cri.go:89] found id: ""
	I1208 02:01:28.198441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.198450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:28.198456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:28.198538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:28.222900 1055021 cri.go:89] found id: ""
	I1208 02:01:28.222925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.222935 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:28.222941 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:28.223006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:28.252429 1055021 cri.go:89] found id: ""
	I1208 02:01:28.252453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.252462 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:28.252468 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:28.252528 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:28.285260 1055021 cri.go:89] found id: ""
	I1208 02:01:28.285287 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.285296 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:28.285302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:28.285362 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:28.322093 1055021 cri.go:89] found id: ""
	I1208 02:01:28.322122 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.322131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:28.322140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:28.322151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:28.358086 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:28.358113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.422767 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:28.422811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:28.441151 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:28.441185 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:28.510892 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:28.510919 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:28.510932 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
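
When no component containers are found, each cycle falls back to the "Gathering logs for ..." steps: kubelet and CRI-O via journalctl, kernel messages via dmesg, and a container listing via crictl (or docker as a fallback). The sketch below simply runs those same diagnostic commands and reports how much output each produced; it is an illustration of the collection step, not minikube's logs.go, and assumes the listed tools exist on the node.

// Hypothetical sketch: run the diagnostic commands seen in the log and collect
// their output sizes. Error handling is deliberately minimal.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
		}
		fmt.Printf("=== %s (%d bytes collected) ===\n", name, len(out))
	}
}
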
	I1208 02:01:31.041345 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:31.056282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:31.056357 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:31.087982 1055021 cri.go:89] found id: ""
	I1208 02:01:31.088007 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.088017 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:31.088023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:31.088086 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:31.113983 1055021 cri.go:89] found id: ""
	I1208 02:01:31.114005 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.114014 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:31.114025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:31.114083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:31.141045 1055021 cri.go:89] found id: ""
	I1208 02:01:31.141069 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.141078 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:31.141085 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:31.141154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:31.167841 1055021 cri.go:89] found id: ""
	I1208 02:01:31.167864 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.167873 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:31.167880 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:31.167937 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:31.193449 1055021 cri.go:89] found id: ""
	I1208 02:01:31.193471 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.193479 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:31.193485 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:31.193542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:31.220825 1055021 cri.go:89] found id: ""
	I1208 02:01:31.220850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.220859 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:31.220865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:31.220926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:31.246036 1055021 cri.go:89] found id: ""
	I1208 02:01:31.246063 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.246071 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:31.246077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:31.246140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:31.282360 1055021 cri.go:89] found id: ""
	I1208 02:01:31.282388 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.282396 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:31.282405 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:31.282416 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:31.351320 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:31.351368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:31.370774 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:31.370887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:31.434743 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:31.434763 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:31.434775 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.462946 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:31.462982 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:33.992261 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:34.004797 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:34.004891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:34.044483 1055021 cri.go:89] found id: ""
	I1208 02:01:34.044506 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.044516 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:34.044523 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:34.044598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:34.072528 1055021 cri.go:89] found id: ""
	I1208 02:01:34.072564 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.072573 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:34.072580 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:34.072654 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:34.102278 1055021 cri.go:89] found id: ""
	I1208 02:01:34.102357 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.102379 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:34.102399 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:34.102487 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:34.129526 1055021 cri.go:89] found id: ""
	I1208 02:01:34.129601 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.129634 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:34.129656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:34.129776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:34.155663 1055021 cri.go:89] found id: ""
	I1208 02:01:34.155689 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.155698 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:34.155704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:34.155777 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:34.186951 1055021 cri.go:89] found id: ""
	I1208 02:01:34.186978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.186988 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:34.186996 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:34.187104 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:34.212379 1055021 cri.go:89] found id: ""
	I1208 02:01:34.212404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.212423 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:34.212430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:34.212489 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:34.238401 1055021 cri.go:89] found id: ""
	I1208 02:01:34.238438 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.238447 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:34.238456 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:34.238468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:34.278895 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:34.278970 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:34.356262 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:34.356303 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:34.376513 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:34.376545 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:34.447804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:34.447829 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:34.447843 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:36.976756 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:36.987574 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:36.987651 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:37.035351 1055021 cri.go:89] found id: ""
	I1208 02:01:37.035376 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.035386 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:37.035393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:37.035457 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:37.065004 1055021 cri.go:89] found id: ""
	I1208 02:01:37.065026 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.065034 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:37.065041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:37.065099 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:37.092804 1055021 cri.go:89] found id: ""
	I1208 02:01:37.092828 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.092837 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:37.092843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:37.092901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:37.117820 1055021 cri.go:89] found id: ""
	I1208 02:01:37.117849 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.117857 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:37.117865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:37.117924 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:37.143955 1055021 cri.go:89] found id: ""
	I1208 02:01:37.143978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.143987 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:37.143993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:37.144055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:37.173740 1055021 cri.go:89] found id: ""
	I1208 02:01:37.173764 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.173772 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:37.173779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:37.173838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:37.202687 1055021 cri.go:89] found id: ""
	I1208 02:01:37.202710 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.202719 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:37.202725 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:37.202786 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:37.229307 1055021 cri.go:89] found id: ""
	I1208 02:01:37.229331 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.229339 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:37.229347 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:37.229360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:37.247500 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:37.247530 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:37.329229 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:37.329252 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:37.329267 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:37.358197 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:37.358238 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:37.387860 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:37.387889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:39.956266 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:39.966752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:39.966823 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:39.991660 1055021 cri.go:89] found id: ""
	I1208 02:01:39.991686 1055021 logs.go:282] 0 containers: []
	W1208 02:01:39.991695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:39.991701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:39.991763 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:40.027823 1055021 cri.go:89] found id: ""
	I1208 02:01:40.027905 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.027928 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:40.027949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:40.028063 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:40.064388 1055021 cri.go:89] found id: ""
	I1208 02:01:40.064464 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.064487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:40.064508 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:40.064594 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:40.094787 1055021 cri.go:89] found id: ""
	I1208 02:01:40.094814 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.094832 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:40.094858 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:40.094922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:40.120620 1055021 cri.go:89] found id: ""
	I1208 02:01:40.120645 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.120654 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:40.120660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:40.120720 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:40.153070 1055021 cri.go:89] found id: ""
	I1208 02:01:40.153097 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.153106 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:40.153112 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:40.153183 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:40.181896 1055021 cri.go:89] found id: ""
	I1208 02:01:40.181925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.181935 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:40.181942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:40.182004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:40.209414 1055021 cri.go:89] found id: ""
	I1208 02:01:40.209441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.209450 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:40.209459 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:40.209470 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:40.274756 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:40.274858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:40.294225 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:40.294364 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:40.365754 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:40.365778 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:40.365791 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:40.394699 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:40.394732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:42.924136 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:42.934800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:42.934894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:42.961825 1055021 cri.go:89] found id: ""
	I1208 02:01:42.961850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.961859 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:42.961867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:42.961927 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:42.988379 1055021 cri.go:89] found id: ""
	I1208 02:01:42.988403 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.988412 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:42.988418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:42.988503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:43.023024 1055021 cri.go:89] found id: ""
	I1208 02:01:43.023047 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.023056 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:43.023063 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:43.023139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:43.057964 1055021 cri.go:89] found id: ""
	I1208 02:01:43.057993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.058001 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:43.058008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:43.058073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:43.088198 1055021 cri.go:89] found id: ""
	I1208 02:01:43.088221 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.088229 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:43.088235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:43.088295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:43.116924 1055021 cri.go:89] found id: ""
	I1208 02:01:43.116950 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.116959 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:43.116965 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:43.117042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:43.143043 1055021 cri.go:89] found id: ""
	I1208 02:01:43.143156 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.143172 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:43.143180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:43.143274 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:43.172524 1055021 cri.go:89] found id: ""
	I1208 02:01:43.172547 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.172556 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:43.172565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:43.172577 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:43.237127 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:43.237162 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:43.256485 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:43.256516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:43.325704 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:43.325725 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:43.325737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:43.354439 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:43.354477 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:45.885598 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:45.896346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:45.896416 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:45.921473 1055021 cri.go:89] found id: ""
	I1208 02:01:45.921499 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.921508 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:45.921515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:45.921576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:45.945701 1055021 cri.go:89] found id: ""
	I1208 02:01:45.945725 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.945734 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:45.945740 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:45.945800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:45.973191 1055021 cri.go:89] found id: ""
	I1208 02:01:45.973213 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.973222 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:45.973228 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:45.973289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:45.999665 1055021 cri.go:89] found id: ""
	I1208 02:01:45.999741 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.999764 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:45.999782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:45.999872 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:46.041104 1055021 cri.go:89] found id: ""
	I1208 02:01:46.041176 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.041202 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:46.041224 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:46.041300 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:46.076259 1055021 cri.go:89] found id: ""
	I1208 02:01:46.076332 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.076355 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:46.076373 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:46.076450 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:46.108098 1055021 cri.go:89] found id: ""
	I1208 02:01:46.108163 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.108179 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:46.108186 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:46.108247 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:46.134928 1055021 cri.go:89] found id: ""
	I1208 02:01:46.134964 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.134974 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:46.134983 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:46.134995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:46.164421 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:46.164498 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:46.233311 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:46.233358 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:46.253422 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:46.253502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:46.336577 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:46.336600 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:46.336614 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:48.865787 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:48.876567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:48.876642 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:48.901147 1055021 cri.go:89] found id: ""
	I1208 02:01:48.901177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.901185 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:48.901192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:48.901250 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:48.927326 1055021 cri.go:89] found id: ""
	I1208 02:01:48.927351 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.927360 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:48.927366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:48.927424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:48.951970 1055021 cri.go:89] found id: ""
	I1208 02:01:48.951994 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.952003 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:48.952009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:48.952073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:48.976700 1055021 cri.go:89] found id: ""
	I1208 02:01:48.976724 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.976732 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:48.976739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:48.976796 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:49.005321 1055021 cri.go:89] found id: ""
	I1208 02:01:49.005349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.005359 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:49.005366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:49.005432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:49.045336 1055021 cri.go:89] found id: ""
	I1208 02:01:49.045359 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.045368 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:49.045397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:49.045478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:49.074970 1055021 cri.go:89] found id: ""
	I1208 02:01:49.074997 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.075006 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:49.075012 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:49.075070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:49.100757 1055021 cri.go:89] found id: ""
	I1208 02:01:49.100780 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.100788 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:49.100796 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:49.100808 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:49.165827 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:49.165862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:49.183539 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:49.183618 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:49.249850 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:49.249874 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:49.249887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:49.280238 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:49.280270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:51.819515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:51.830251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:51.830329 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:51.856077 1055021 cri.go:89] found id: ""
	I1208 02:01:51.856098 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.856107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:51.856113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:51.856170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:51.882057 1055021 cri.go:89] found id: ""
	I1208 02:01:51.882086 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.882096 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:51.882103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:51.882170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:51.908531 1055021 cri.go:89] found id: ""
	I1208 02:01:51.908572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.908582 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:51.908588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:51.908649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:51.933571 1055021 cri.go:89] found id: ""
	I1208 02:01:51.933594 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.933603 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:51.933610 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:51.933671 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:51.959716 1055021 cri.go:89] found id: ""
	I1208 02:01:51.959777 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.959800 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:51.959825 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:51.959903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:51.985320 1055021 cri.go:89] found id: ""
	I1208 02:01:51.985384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.985409 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:51.985427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:51.985507 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:52.029640 1055021 cri.go:89] found id: ""
	I1208 02:01:52.029709 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.029736 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:52.029756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:52.029835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:52.060725 1055021 cri.go:89] found id: ""
	I1208 02:01:52.060803 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.060826 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:52.060848 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:52.060874 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:52.129431 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:52.129468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:52.148064 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:52.148095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:52.220103 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:52.220125 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:52.220137 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:52.248853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:52.248892 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:54.781319 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:54.791942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:54.792009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:54.816799 1055021 cri.go:89] found id: ""
	I1208 02:01:54.816821 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.816830 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:54.816835 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:54.816893 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:54.846002 1055021 cri.go:89] found id: ""
	I1208 02:01:54.846028 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.846036 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:54.846043 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:54.846101 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:54.870704 1055021 cri.go:89] found id: ""
	I1208 02:01:54.870729 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.870737 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:54.870744 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:54.870807 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:54.897236 1055021 cri.go:89] found id: ""
	I1208 02:01:54.897302 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.897327 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:54.897347 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:54.897432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:54.921729 1055021 cri.go:89] found id: ""
	I1208 02:01:54.921754 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.921763 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:54.921769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:54.921830 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:54.949586 1055021 cri.go:89] found id: ""
	I1208 02:01:54.949610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.949619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:54.949626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:54.949687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:54.976595 1055021 cri.go:89] found id: ""
	I1208 02:01:54.976618 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.976627 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:54.976633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:54.976708 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:55.012149 1055021 cri.go:89] found id: ""
	I1208 02:01:55.012179 1055021 logs.go:282] 0 containers: []
	W1208 02:01:55.012188 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:55.012198 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:55.012211 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:55.089182 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:55.089225 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:55.107781 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:55.107811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:55.175880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:55.175942 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:55.175962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:55.205060 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:55.205095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:57.733634 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:57.744236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:57.744308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:57.769149 1055021 cri.go:89] found id: ""
	I1208 02:01:57.769173 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.769182 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:57.769188 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:57.769246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:57.796831 1055021 cri.go:89] found id: ""
	I1208 02:01:57.796860 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.796869 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:57.796876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:57.796932 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:57.821809 1055021 cri.go:89] found id: ""
	I1208 02:01:57.821834 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.821844 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:57.821850 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:57.821917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:57.849385 1055021 cri.go:89] found id: ""
	I1208 02:01:57.849410 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.849418 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:57.849424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:57.849481 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:57.874645 1055021 cri.go:89] found id: ""
	I1208 02:01:57.874669 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.874678 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:57.874684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:57.874742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:57.899500 1055021 cri.go:89] found id: ""
	I1208 02:01:57.899572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.899608 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:57.899623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:57.899695 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:57.926677 1055021 cri.go:89] found id: ""
	I1208 02:01:57.926711 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.926720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:57.926727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:57.926833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:57.952159 1055021 cri.go:89] found id: ""
	I1208 02:01:57.952233 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.952249 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:57.952259 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:57.952271 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:58.017945 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:58.018082 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:58.036702 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:58.036877 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:58.109217 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:58.109239 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:58.109252 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:58.137424 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:58.137460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:00.669211 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:00.679729 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:00.679803 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:00.704116 1055021 cri.go:89] found id: ""
	I1208 02:02:00.704140 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.704149 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:00.704156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:00.704220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:00.728883 1055021 cri.go:89] found id: ""
	I1208 02:02:00.728908 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.728917 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:00.728923 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:00.728984 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:00.757361 1055021 cri.go:89] found id: ""
	I1208 02:02:00.757437 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.757453 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:00.757461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:00.757523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:00.784303 1055021 cri.go:89] found id: ""
	I1208 02:02:00.784332 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.784342 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:00.784349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:00.784420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:00.814794 1055021 cri.go:89] found id: ""
	I1208 02:02:00.814818 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.814827 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:00.814833 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:00.814915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:00.840985 1055021 cri.go:89] found id: ""
	I1208 02:02:00.841052 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.841069 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:00.841077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:00.841140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:00.869242 1055021 cri.go:89] found id: ""
	I1208 02:02:00.869268 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.869277 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:00.869283 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:00.869348 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:00.895515 1055021 cri.go:89] found id: ""
	I1208 02:02:00.895540 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.895549 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:00.895557 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:00.895600 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:00.963574 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:00.963611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:00.981868 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:00.981900 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:01.074452 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:01.074541 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:01.074602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:01.107635 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:01.107672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:03.643395 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:03.654301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:03.654370 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:03.680571 1055021 cri.go:89] found id: ""
	I1208 02:02:03.680609 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.680619 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:03.680626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:03.680696 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:03.709419 1055021 cri.go:89] found id: ""
	I1208 02:02:03.709444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.709453 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:03.709459 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:03.709518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:03.736028 1055021 cri.go:89] found id: ""
	I1208 02:02:03.736064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.736073 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:03.736079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:03.736140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:03.760906 1055021 cri.go:89] found id: ""
	I1208 02:02:03.760983 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.761005 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:03.761019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:03.761095 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:03.789527 1055021 cri.go:89] found id: ""
	I1208 02:02:03.789563 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.789572 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:03.789578 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:03.789655 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:03.817176 1055021 cri.go:89] found id: ""
	I1208 02:02:03.817203 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.817211 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:03.817218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:03.817277 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:03.847025 1055021 cri.go:89] found id: ""
	I1208 02:02:03.847053 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.847063 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:03.847070 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:03.847161 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:03.872945 1055021 cri.go:89] found id: ""
	I1208 02:02:03.872972 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.872981 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:03.872990 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:03.873002 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:03.938890 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:03.938927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:03.956669 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:03.956699 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:04.047856 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:04.047931 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:04.047960 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:04.084291 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:04.084328 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:06.621579 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:06.632180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:06.632262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:06.658187 1055021 cri.go:89] found id: ""
	I1208 02:02:06.658214 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.658223 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:06.658230 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:06.658289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:06.683455 1055021 cri.go:89] found id: ""
	I1208 02:02:06.683479 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.683487 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:06.683494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:06.683555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:06.709121 1055021 cri.go:89] found id: ""
	I1208 02:02:06.709147 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.709156 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:06.709162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:06.709220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:06.735601 1055021 cri.go:89] found id: ""
	I1208 02:02:06.735639 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.735649 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:06.735655 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:06.735717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:06.761793 1055021 cri.go:89] found id: ""
	I1208 02:02:06.761817 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.761826 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:06.761832 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:06.761891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:06.787053 1055021 cri.go:89] found id: ""
	I1208 02:02:06.787075 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.787092 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:06.787099 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:06.787168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:06.815964 1055021 cri.go:89] found id: ""
	I1208 02:02:06.815990 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.815999 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:06.816006 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:06.816067 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:06.841508 1055021 cri.go:89] found id: ""
	I1208 02:02:06.841534 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.841543 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:06.841552 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:06.841564 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:06.906588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:06.906627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:06.925347 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:06.925380 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:07.004820 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:07.004851 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:07.004865 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:07.038308 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:07.038348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.573053 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:09.583792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:09.583864 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:09.611232 1055021 cri.go:89] found id: ""
	I1208 02:02:09.611255 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.611265 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:09.611271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:09.611340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:09.636029 1055021 cri.go:89] found id: ""
	I1208 02:02:09.636054 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.636063 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:09.636069 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:09.636127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:09.662307 1055021 cri.go:89] found id: ""
	I1208 02:02:09.662334 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.662344 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:09.662350 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:09.662430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:09.688279 1055021 cri.go:89] found id: ""
	I1208 02:02:09.688304 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.688314 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:09.688320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:09.688385 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:09.717056 1055021 cri.go:89] found id: ""
	I1208 02:02:09.717081 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.717090 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:09.717097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:09.717206 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:09.745719 1055021 cri.go:89] found id: ""
	I1208 02:02:09.745744 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.745753 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:09.745760 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:09.745820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:09.774995 1055021 cri.go:89] found id: ""
	I1208 02:02:09.775020 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.775029 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:09.775035 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:09.775107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:09.800142 1055021 cri.go:89] found id: ""
	I1208 02:02:09.800165 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.800174 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:09.800183 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:09.800196 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:09.817474 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:09.817504 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:09.881166 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:09.881188 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:09.881201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:09.909282 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:09.909316 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.936890 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:09.936917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:12.504767 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:12.517010 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:12.517087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:12.552375 1055021 cri.go:89] found id: ""
	I1208 02:02:12.552405 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.552414 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:12.552421 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:12.552484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:12.581970 1055021 cri.go:89] found id: ""
	I1208 02:02:12.581993 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.582002 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:12.582008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:12.582070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:12.609191 1055021 cri.go:89] found id: ""
	I1208 02:02:12.609215 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.609223 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:12.609229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:12.609289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:12.634872 1055021 cri.go:89] found id: ""
	I1208 02:02:12.634900 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.634909 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:12.634917 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:12.634977 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:12.660600 1055021 cri.go:89] found id: ""
	I1208 02:02:12.660622 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.660631 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:12.660637 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:12.660698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:12.686371 1055021 cri.go:89] found id: ""
	I1208 02:02:12.686394 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.686402 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:12.686409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:12.686468 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:12.711549 1055021 cri.go:89] found id: ""
	I1208 02:02:12.711574 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.711583 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:12.711589 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:12.711650 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:12.736572 1055021 cri.go:89] found id: ""
	I1208 02:02:12.736599 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.736609 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:12.736619 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:12.736631 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:12.754919 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:12.754947 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:12.825472 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:12.825494 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:12.825508 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:12.854189 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:12.854226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:12.881205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:12.881233 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:15.446588 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:15.457588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:15.457660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:15.482738 1055021 cri.go:89] found id: ""
	I1208 02:02:15.482763 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.482772 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:15.482779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:15.482877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:15.511332 1055021 cri.go:89] found id: ""
	I1208 02:02:15.511364 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.511373 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:15.511380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:15.511446 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:15.555502 1055021 cri.go:89] found id: ""
	I1208 02:02:15.555528 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.555537 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:15.555543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:15.555604 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:15.584568 1055021 cri.go:89] found id: ""
	I1208 02:02:15.584590 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.584598 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:15.584604 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:15.584662 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:15.613196 1055021 cri.go:89] found id: ""
	I1208 02:02:15.613219 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.613228 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:15.613234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:15.613299 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:15.642375 1055021 cri.go:89] found id: ""
	I1208 02:02:15.642396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.642404 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:15.642411 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:15.642469 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:15.666701 1055021 cri.go:89] found id: ""
	I1208 02:02:15.666724 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.666733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:15.666739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:15.666804 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:15.694203 1055021 cri.go:89] found id: ""
	I1208 02:02:15.694226 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.694235 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:15.694244 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:15.694256 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:15.711985 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:15.712018 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:15.783845 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:15.783867 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:15.783880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:15.812138 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:15.812172 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:15.841785 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:15.841815 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
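The block above is one pass of minikube's apiserver health-check loop: it probes for each expected control-plane container by name and, finding none, falls back to collecting node logs. A minimal shell sketch of the same probe, using the crictl invocation recorded in the log (run inside the node, e.g. via `minikube ssh`; the component list is taken from the entries above):

    # One pass of the per-component container probe seen in the log.
    # Assumes crictl is available on the node, as in the minikube CRI-O image.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""   # matches the W-level lines above
      else
        echo "$name: $ids"
      fi
    done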
	I1208 02:02:18.407879 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:18.418616 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:18.418687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:18.452125 1055021 cri.go:89] found id: ""
	I1208 02:02:18.452149 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.452158 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:18.452165 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:18.452226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:18.484590 1055021 cri.go:89] found id: ""
	I1208 02:02:18.484618 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.484627 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:18.484633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:18.484693 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:18.521073 1055021 cri.go:89] found id: ""
	I1208 02:02:18.521101 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.521111 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:18.521117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:18.521195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:18.552106 1055021 cri.go:89] found id: ""
	I1208 02:02:18.552131 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.552142 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:18.552149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:18.552234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:18.583000 1055021 cri.go:89] found id: ""
	I1208 02:02:18.583026 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.583034 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:18.583041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:18.583108 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:18.608873 1055021 cri.go:89] found id: ""
	I1208 02:02:18.608901 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.608909 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:18.608916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:18.608975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:18.638459 1055021 cri.go:89] found id: ""
	I1208 02:02:18.638482 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.638491 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:18.638497 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:18.638554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:18.664652 1055021 cri.go:89] found id: ""
	I1208 02:02:18.664678 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.664687 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:18.664696 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:18.664708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:18.727887 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:18.727909 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:18.727922 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:18.756733 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:18.756768 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:18.784791 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:18.784819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.854704 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:18.854747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
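Each pass also retries `kubectl describe nodes` against the static kubeconfig, and the "connection refused" stderr above shows why it keeps failing: nothing is listening on localhost:8443 because no kube-apiserver container exists yet. A hypothetical manual check along the same lines, with the command copied from the log (the binary path is the v1.35.0-beta.0 one staged by minikube):

    # Re-run the describe-nodes call from the log and report the outcome.
    # Run inside the minikube node.
    if sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig; then
      echo "apiserver reachable"
    else
      echo "describe nodes failed (exit $?) - nothing answering on localhost:8443"
    fi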
	I1208 02:02:21.373144 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:21.384002 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:21.384076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:21.408827 1055021 cri.go:89] found id: ""
	I1208 02:02:21.408851 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.408860 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:21.408866 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:21.408926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:21.437335 1055021 cri.go:89] found id: ""
	I1208 02:02:21.437366 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.437375 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:21.437380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:21.437440 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:21.461726 1055021 cri.go:89] found id: ""
	I1208 02:02:21.461753 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.461762 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:21.461768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:21.461827 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:21.486068 1055021 cri.go:89] found id: ""
	I1208 02:02:21.486095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.486104 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:21.486110 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:21.486168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:21.521646 1055021 cri.go:89] found id: ""
	I1208 02:02:21.521671 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.521679 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:21.521686 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:21.521754 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:21.549687 1055021 cri.go:89] found id: ""
	I1208 02:02:21.549714 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.549723 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:21.549730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:21.549789 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:21.584524 1055021 cri.go:89] found id: ""
	I1208 02:02:21.584600 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.584615 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:21.584623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:21.584686 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:21.613834 1055021 cri.go:89] found id: ""
	I1208 02:02:21.613859 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.613868 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:21.613877 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:21.613888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:21.679269 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:21.679305 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.696894 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:21.696924 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:21.763490 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:21.763525 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:21.763538 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:21.791788 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:21.791819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.320943 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:24.332441 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:24.332511 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:24.359381 1055021 cri.go:89] found id: ""
	I1208 02:02:24.359403 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.359412 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:24.359418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:24.359484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:24.385766 1055021 cri.go:89] found id: ""
	I1208 02:02:24.385789 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.385798 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:24.385804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:24.385870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:24.412597 1055021 cri.go:89] found id: ""
	I1208 02:02:24.412619 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.412633 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:24.412640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:24.412700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:24.438239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.438262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.438270 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:24.438277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:24.438336 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:24.465529 1055021 cri.go:89] found id: ""
	I1208 02:02:24.465551 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.465560 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:24.465566 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:24.465628 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:24.490130 1055021 cri.go:89] found id: ""
	I1208 02:02:24.490153 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.490162 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:24.490168 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:24.490228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:24.531239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.531262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.531271 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:24.531277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:24.531335 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:24.570624 1055021 cri.go:89] found id: ""
	I1208 02:02:24.570646 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.570654 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:24.570663 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:24.570676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:24.588822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:24.588852 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:24.650804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:24.650826 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:24.650858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:24.680022 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:24.680060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.708316 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:24.708352 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
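When no control-plane containers are found, each pass gathers the same four log sources before retrying. The commands below are verbatim from the entries above and can be run by hand on the node to inspect the same data:

    # Log-gathering commands used by the loop above (verbatim from the log).
    sudo journalctl -u kubelet -n 400                                          # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo journalctl -u crio -n 400                                             # CRI-O
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status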
	I1208 02:02:27.274217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:27.287664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:27.287788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:27.318113 1055021 cri.go:89] found id: ""
	I1208 02:02:27.318193 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.318215 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:27.318234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:27.318332 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:27.344915 1055021 cri.go:89] found id: ""
	I1208 02:02:27.344943 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.344951 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:27.344958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:27.345024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:27.374469 1055021 cri.go:89] found id: ""
	I1208 02:02:27.374502 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.374512 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:27.374519 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:27.374588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:27.399626 1055021 cri.go:89] found id: ""
	I1208 02:02:27.399665 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.399674 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:27.399680 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:27.399753 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:27.429184 1055021 cri.go:89] found id: ""
	I1208 02:02:27.429222 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.429230 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:27.429236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:27.429303 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:27.453872 1055021 cri.go:89] found id: ""
	I1208 02:02:27.453910 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.453919 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:27.453926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:27.453996 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:27.479093 1055021 cri.go:89] found id: ""
	I1208 02:02:27.479117 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.479127 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:27.479134 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:27.479195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:27.513793 1055021 cri.go:89] found id: ""
	I1208 02:02:27.513820 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.513840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:27.513849 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:27.513862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:27.543879 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:27.543958 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:27.585714 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:27.585783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.651465 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:27.651502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:27.669169 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:27.669201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:27.732840 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.233103 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:30.244434 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:30.244504 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:30.286359 1055021 cri.go:89] found id: ""
	I1208 02:02:30.286381 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.286390 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:30.286396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:30.286455 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:30.317925 1055021 cri.go:89] found id: ""
	I1208 02:02:30.317947 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.317955 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:30.317960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:30.318020 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:30.352522 1055021 cri.go:89] found id: ""
	I1208 02:02:30.352543 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.352551 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:30.352557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:30.352619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:30.376895 1055021 cri.go:89] found id: ""
	I1208 02:02:30.376917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.376925 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:30.376932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:30.376989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:30.401457 1055021 cri.go:89] found id: ""
	I1208 02:02:30.401478 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.401487 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:30.401493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:30.401551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:30.428269 1055021 cri.go:89] found id: ""
	I1208 02:02:30.428291 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.428300 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:30.428306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:30.428366 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:30.452846 1055021 cri.go:89] found id: ""
	I1208 02:02:30.452869 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.452878 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:30.452884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:30.452946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:30.477617 1055021 cri.go:89] found id: ""
	I1208 02:02:30.477645 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.477655 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:30.477665 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:30.477676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:30.507758 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:30.507782 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:30.577724 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:30.577802 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:30.598108 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:30.598136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:30.663869 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.663892 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:30.663905 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.192012 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:33.202802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:33.202903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:33.229607 1055021 cri.go:89] found id: ""
	I1208 02:02:33.229629 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.229638 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:33.229645 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:33.229704 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:33.257802 1055021 cri.go:89] found id: ""
	I1208 02:02:33.257837 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.257847 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:33.257854 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:33.257913 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:33.289073 1055021 cri.go:89] found id: ""
	I1208 02:02:33.289095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.289103 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:33.289113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:33.289171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:33.317039 1055021 cri.go:89] found id: ""
	I1208 02:02:33.317060 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.317069 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:33.317075 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:33.317137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:33.342479 1055021 cri.go:89] found id: ""
	I1208 02:02:33.342500 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.342509 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:33.342515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:33.342577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:33.367849 1055021 cri.go:89] found id: ""
	I1208 02:02:33.367877 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.367886 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:33.367892 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:33.367950 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:33.393711 1055021 cri.go:89] found id: ""
	I1208 02:02:33.393739 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.393748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:33.393755 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:33.393818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:33.419264 1055021 cri.go:89] found id: ""
	I1208 02:02:33.419286 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.419295 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:33.419303 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:33.419320 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.446586 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:33.446620 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:33.474605 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:33.474633 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:33.546521 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:33.546562 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:33.567522 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:33.567553 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:33.633164 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.133387 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:36.145051 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:36.145130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:36.178396 1055021 cri.go:89] found id: ""
	I1208 02:02:36.178426 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.178434 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:36.178442 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:36.178500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:36.204662 1055021 cri.go:89] found id: ""
	I1208 02:02:36.204685 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.204694 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:36.204700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:36.204758 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:36.233744 1055021 cri.go:89] found id: ""
	I1208 02:02:36.233766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.233776 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:36.233782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:36.233844 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:36.271413 1055021 cri.go:89] found id: ""
	I1208 02:02:36.271436 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.271445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:36.271453 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:36.271518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:36.299867 1055021 cri.go:89] found id: ""
	I1208 02:02:36.299889 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.299898 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:36.299905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:36.299967 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:36.333748 1055021 cri.go:89] found id: ""
	I1208 02:02:36.333771 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.333779 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:36.333786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:36.333877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:36.359920 1055021 cri.go:89] found id: ""
	I1208 02:02:36.359944 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.359953 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:36.359959 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:36.360016 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:36.384561 1055021 cri.go:89] found id: ""
	I1208 02:02:36.384583 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.384592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:36.384600 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:36.384611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:36.449118 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:36.449153 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:36.469510 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:36.469537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:36.544911 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.544934 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:36.544972 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:36.577604 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:36.577640 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.106569 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:39.117314 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:39.117406 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:39.147330 1055021 cri.go:89] found id: ""
	I1208 02:02:39.147354 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.147362 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:39.147369 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:39.147429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:39.175702 1055021 cri.go:89] found id: ""
	I1208 02:02:39.175725 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.175733 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:39.175739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:39.175797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:39.209892 1055021 cri.go:89] found id: ""
	I1208 02:02:39.209917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.209926 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:39.209932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:39.209990 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:39.235210 1055021 cri.go:89] found id: ""
	I1208 02:02:39.235239 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.235248 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:39.235255 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:39.235312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:39.268421 1055021 cri.go:89] found id: ""
	I1208 02:02:39.268444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.268453 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:39.268460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:39.268520 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:39.308045 1055021 cri.go:89] found id: ""
	I1208 02:02:39.308070 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.308079 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:39.308086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:39.308152 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:39.338659 1055021 cri.go:89] found id: ""
	I1208 02:02:39.338684 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.338693 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:39.338699 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:39.338759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:39.369373 1055021 cri.go:89] found id: ""
	I1208 02:02:39.369396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.369405 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:39.369414 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:39.369426 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.401929 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:39.401959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:39.466665 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:39.466705 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:39.484758 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:39.484786 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:39.570718 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:39.570737 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:39.570750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.101949 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:42.135199 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:42.135361 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:42.190279 1055021 cri.go:89] found id: ""
	I1208 02:02:42.190367 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.190393 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:42.190415 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:42.190545 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:42.222777 1055021 cri.go:89] found id: ""
	I1208 02:02:42.222883 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.222911 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:42.222934 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:42.223043 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:42.257086 1055021 cri.go:89] found id: ""
	I1208 02:02:42.257169 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.257193 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:42.257217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:42.257340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:42.290338 1055021 cri.go:89] found id: ""
	I1208 02:02:42.290421 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.290445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:42.290464 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:42.290571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:42.321497 1055021 cri.go:89] found id: ""
	I1208 02:02:42.321567 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.321592 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:42.321612 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:42.321710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:42.351037 1055021 cri.go:89] found id: ""
	I1208 02:02:42.351157 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.351184 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:42.351205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:42.351308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:42.377225 1055021 cri.go:89] found id: ""
	I1208 02:02:42.377251 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.377259 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:42.377266 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:42.377324 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:42.403038 1055021 cri.go:89] found id: ""
	I1208 02:02:42.403064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.403073 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:42.403117 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:42.403130 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:42.468670 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:42.468709 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:42.486822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:42.486906 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:42.576804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:42.576828 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:42.576844 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.609307 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:42.609345 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:45.139048 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:45.153298 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:45.153393 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:45.190816 1055021 cri.go:89] found id: ""
	I1208 02:02:45.190864 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.190874 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:45.190882 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:45.190954 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:45.248053 1055021 cri.go:89] found id: ""
	I1208 02:02:45.248087 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.248097 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:45.248105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:45.248178 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:45.291403 1055021 cri.go:89] found id: ""
	I1208 02:02:45.291441 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.291506 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:45.291539 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:45.291685 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:45.327809 1055021 cri.go:89] found id: ""
	I1208 02:02:45.327885 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.327907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:45.327925 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:45.328011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:45.356269 1055021 cri.go:89] found id: ""
	I1208 02:02:45.356293 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.356302 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:45.356308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:45.356386 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:45.385189 1055021 cri.go:89] found id: ""
	I1208 02:02:45.385213 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.385222 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:45.385229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:45.385309 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:45.413524 1055021 cri.go:89] found id: ""
	I1208 02:02:45.413549 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.413558 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:45.413565 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:45.413652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:45.443469 1055021 cri.go:89] found id: ""
	I1208 02:02:45.443547 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.443563 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:45.443572 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:45.443584 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:45.515350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:45.515441 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:45.534931 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:45.534961 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:45.612239 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:45.612262 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:45.612274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:45.640465 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:45.640503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.170309 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:48.181762 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:48.181835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:48.209264 1055021 cri.go:89] found id: ""
	I1208 02:02:48.209288 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.209297 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:48.209303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:48.209364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:48.236743 1055021 cri.go:89] found id: ""
	I1208 02:02:48.236766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.236775 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:48.236782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:48.236847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:48.275731 1055021 cri.go:89] found id: ""
	I1208 02:02:48.275757 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.275765 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:48.275772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:48.275837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:48.311639 1055021 cri.go:89] found id: ""
	I1208 02:02:48.311667 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.311676 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:48.311682 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:48.311744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:48.342675 1055021 cri.go:89] found id: ""
	I1208 02:02:48.342711 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.342720 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:48.342726 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:48.342808 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:48.369485 1055021 cri.go:89] found id: ""
	I1208 02:02:48.369519 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.369528 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:48.369535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:48.369608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:48.396744 1055021 cri.go:89] found id: ""
	I1208 02:02:48.396769 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.396778 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:48.396785 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:48.396847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:48.422870 1055021 cri.go:89] found id: ""
	I1208 02:02:48.422894 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.422904 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:48.422913 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:48.422927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.454409 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:48.454482 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:48.522366 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:48.522456 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:48.541233 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:48.541391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:48.617160 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:48.617226 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:48.617247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:51.146382 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:51.160619 1055021 out.go:203] 
	W1208 02:02:51.163425 1055021 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 02:02:51.163473 1055021 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 02:02:51.163484 1055021 out.go:285] * Related issues:
	W1208 02:02:51.163498 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 02:02:51.163517 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 02:02:51.166282 1055021 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317270944Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317325255Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317374683Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317435303Z" level=info msg="RDT not available in the host system"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317500313Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318427518Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318519039Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318582121Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319471993Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319585217Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319774265Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.320528701Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321124572Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321312036Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371792319Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371951033Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372008469Z" level=info msg="Create NRI interface"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372105816Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372118829Z" level=info msg="runtime interface created"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372130251Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372136659Z" level=info msg="runtime interface starting up..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372142583Z" level=info msg="starting plugins..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372154743Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372216209Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:56:47 newest-cni-448023 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:54.364419   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:54.365188   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:54.366980   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:54.367638   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:54.369225   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:02:54 up  6:45,  0 user,  load average: 1.09, 0.74, 1.10
	Linux newest-cni-448023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:02:51 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:52 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 08 02:02:52 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:52 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:52 newest-cni-448023 kubelet[13378]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:52 newest-cni-448023 kubelet[13378]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:52 newest-cni-448023 kubelet[13378]: E1208 02:02:52.322359   13378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:52 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:52 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13399]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13399]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13399]: E1208 02:02:53.095384   13399 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13405]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13405]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:53 newest-cni-448023 kubelet[13405]: E1208 02:02:53.818088   13405 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:53 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (396.279861ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-448023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (375.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.91s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 17 more times]
E1208 01:57:46.336104  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 64 more times]
E1208 01:58:51.934447  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 60 more times]
E1208 01:59:52.721959  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 22 more times]
E1208 02:00:14.997756  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 1 more time]
E1208 02:00:17.461551  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:00:34.379571  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:00:45.329892  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:02:08.400565  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:02:46.336325  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:03:51.934477  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeated 60 more times]
E1208 02:04:52.721828  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeated 41 more times]
E1208 02:05:34.379564  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeated 10 more times]
E1208 02:05:45.329900  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 2 (356.488314ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
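The failure above is the pod-wait helper giving up: it keeps listing pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label selector, every call fails with "connection refused" because nothing is listening on 192.168.76.2:8443 after the restart, and the 9m0s context expires. The Go sketch below illustrates that poll loop under stated assumptions; the file name, retry interval, and error handling are placeholders, not minikube's actual helper code, and the kubeconfig path is the one reported in the Last Start log further down.

// pollforpod.go - hypothetical sketch of waiting for a pod by label selector,
// in the spirit of helpers_test.go's pod wait; not minikube's real implementation.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run's environment (KUBECONFIG in the log below).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22054-789938/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall deadline as the failing test: 9 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// This is the repeated "WARNING: pod list ... returned" path in the log:
			// connection refused while the apiserver on 192.168.76.2:8443 is down.
			fmt.Println("WARNING:", err)
		} else if len(pods.Items) > 0 {
			fmt.Println("found", len(pods.Items), "pod(s)")
			return
		}
		select {
		case <-ctx.Done():
			// Mirrors "failed to start within 9m0s: context deadline exceeded".
			fmt.Println("failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(10 * time.Second): // assumed retry interval
		}
	}
}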
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1047287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:50:50.554953574Z",
	            "FinishedAt": "2025-12-08T01:50:49.214340581Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eaeeec708b96ab10f53f5e7226e115539fe166bf63ca544042e974e7018b260",
	            "SandboxKey": "/var/run/docker/netns/6eaeeec708b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:00:7d:ce:0b:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "795d8a30b86237e9ff6e670d6bc504ea3f9738fbb154a7d1d8e6085bd1fb8cce",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
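The inspect dump above is normally read back field by field rather than as raw JSON; the same data can be extracted with a Go-template --format, which is exactly what the cli_runner calls later in this log do for the "22/tcp" host port. A minimal sketch, assuming the docker CLI is on PATH and the no-preload-389831 container still exists, that pulls the published host port for the apiserver's 8443/tcp mapping (33815 in this run):

// inspectport.go - illustrative only, not minikube's cli_runner: extract one field
// from the inspect output above using a Go-template --format string.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape the log shows for "22/tcp", applied to "8443/tcp" here.
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-389831").CombinedOutput()
	if err != nil {
		fmt.Println("inspect failed:", err, string(out))
		return
	}
	// For the container inspected above this prints 127.0.0.1:33815.
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}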
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 2 (331.633316ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
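Both status probes above rely on --format, which minikube evaluates as a Go text/template against its status value, so {{.APIServer}} rendered "Stopped" while {{.Host}} rendered "Running" for the same profile. A rough illustration of that mechanism follows; the struct and its values here are stand-ins chosen to match this run's output, not minikube's real status type.

// statusformat.go - illustrative sketch of how a --format={{.APIServer}} style
// Go template is rendered against a status value.
package main

import (
	"os"
	"text/template"
)

// Status is an assumed stand-in; field values mirror the probes above.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the first probe
}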
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-000739 sudo systemctl status kubelet --all --full --no-pager                                                                     │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo systemctl cat docker --no-pager                                                                                      │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /etc/docker/daemon.json                                                                                          │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo docker system info                                                                                                   │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cri-dockerd --version                                                                                                │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	│ ssh     │ -p auto-000739 sudo systemctl cat containerd --no-pager                                                                                  │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo cat /etc/containerd/config.toml                                                                                      │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo containerd config dump                                                                                               │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo systemctl cat crio --no-pager                                                                                        │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ ssh     │ -p auto-000739 sudo crio config                                                                                                          │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ delete  │ -p auto-000739                                                                                                                           │ auto-000739    │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │ 08 Dec 25 02:04 UTC │
	│ start   │ -p kindnet-000739 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-000739 │ jenkins │ v1.37.0 │ 08 Dec 25 02:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 02:04:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 02:04:59.267108 1079950 out.go:360] Setting OutFile to fd 1 ...
	I1208 02:04:59.267519 1079950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:04:59.267565 1079950 out.go:374] Setting ErrFile to fd 2...
	I1208 02:04:59.267588 1079950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:04:59.267919 1079950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 02:04:59.268392 1079950 out.go:368] Setting JSON to false
	I1208 02:04:59.269351 1079950 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24432,"bootTime":1765135068,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 02:04:59.269459 1079950 start.go:143] virtualization:  
	I1208 02:04:59.272757 1079950 out.go:179] * [kindnet-000739] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 02:04:59.276980 1079950 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 02:04:59.277149 1079950 notify.go:221] Checking for updates...
	I1208 02:04:59.281074 1079950 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 02:04:59.284292 1079950 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 02:04:59.287354 1079950 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 02:04:59.290426 1079950 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 02:04:59.293459 1079950 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 02:04:59.297017 1079950 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 02:04:59.297124 1079950 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 02:04:59.318613 1079950 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 02:04:59.318753 1079950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:04:59.377290 1079950 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:04:59.368155656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:04:59.377396 1079950 docker.go:319] overlay module found
	I1208 02:04:59.380580 1079950 out.go:179] * Using the docker driver based on user configuration
	I1208 02:04:59.383694 1079950 start.go:309] selected driver: docker
	I1208 02:04:59.383719 1079950 start.go:927] validating driver "docker" against <nil>
	I1208 02:04:59.383733 1079950 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 02:04:59.384476 1079950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:04:59.437177 1079950 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:04:59.42701923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:04:59.437356 1079950 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 02:04:59.437589 1079950 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 02:04:59.440601 1079950 out.go:179] * Using Docker driver with root privileges
	I1208 02:04:59.443471 1079950 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:04:59.443494 1079950 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 02:04:59.443575 1079950 start.go:353] cluster config:
	{Name:kindnet-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:04:59.446637 1079950 out.go:179] * Starting "kindnet-000739" primary control-plane node in "kindnet-000739" cluster
	I1208 02:04:59.449523 1079950 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 02:04:59.452432 1079950 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 02:04:59.455209 1079950 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:04:59.455253 1079950 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 02:04:59.455278 1079950 cache.go:65] Caching tarball of preloaded images
	I1208 02:04:59.455292 1079950 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 02:04:59.455364 1079950 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 02:04:59.455374 1079950 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 02:04:59.455476 1079950 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/config.json ...
	I1208 02:04:59.455494 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/config.json: {Name:mk8982234c098df5c636ccf3628866a661958fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:04:59.474425 1079950 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 02:04:59.474449 1079950 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 02:04:59.474464 1079950 cache.go:243] Successfully downloaded all kic artifacts
	I1208 02:04:59.474503 1079950 start.go:360] acquireMachinesLock for kindnet-000739: {Name:mk43fdb6c3ae09f956c8acf38159c1dd386d4280 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 02:04:59.474612 1079950 start.go:364] duration metric: took 87.927µs to acquireMachinesLock for "kindnet-000739"
	I1208 02:04:59.474643 1079950 start.go:93] Provisioning new machine with config: &{Name:kindnet-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-000739 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 02:04:59.474722 1079950 start.go:125] createHost starting for "" (driver="docker")
	I1208 02:04:59.478044 1079950 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 02:04:59.478275 1079950 start.go:159] libmachine.API.Create for "kindnet-000739" (driver="docker")
	I1208 02:04:59.478313 1079950 client.go:173] LocalClient.Create starting
	I1208 02:04:59.478380 1079950 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 02:04:59.478428 1079950 main.go:143] libmachine: Decoding PEM data...
	I1208 02:04:59.478451 1079950 main.go:143] libmachine: Parsing certificate...
	I1208 02:04:59.478523 1079950 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 02:04:59.478545 1079950 main.go:143] libmachine: Decoding PEM data...
	I1208 02:04:59.478561 1079950 main.go:143] libmachine: Parsing certificate...
	I1208 02:04:59.478971 1079950 cli_runner.go:164] Run: docker network inspect kindnet-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 02:04:59.494836 1079950 cli_runner.go:211] docker network inspect kindnet-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 02:04:59.494946 1079950 network_create.go:284] running [docker network inspect kindnet-000739] to gather additional debugging logs...
	I1208 02:04:59.494969 1079950 cli_runner.go:164] Run: docker network inspect kindnet-000739
	W1208 02:04:59.516568 1079950 cli_runner.go:211] docker network inspect kindnet-000739 returned with exit code 1
	I1208 02:04:59.516605 1079950 network_create.go:287] error running [docker network inspect kindnet-000739]: docker network inspect kindnet-000739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-000739 not found
	I1208 02:04:59.516618 1079950 network_create.go:289] output of [docker network inspect kindnet-000739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-000739 not found
	
	** /stderr **
	I1208 02:04:59.516717 1079950 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:04:59.534332 1079950 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 02:04:59.534661 1079950 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 02:04:59.535148 1079950 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 02:04:59.535456 1079950 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 02:04:59.535869 1079950 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f9740}
	I1208 02:04:59.535886 1079950 network_create.go:124] attempt to create docker network kindnet-000739 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 02:04:59.535943 1079950 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-000739 kindnet-000739
	I1208 02:04:59.591628 1079950 network_create.go:108] docker network kindnet-000739 192.168.85.0/24 created
	I1208 02:04:59.591658 1079950 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-000739" container
	I1208 02:04:59.591731 1079950 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 02:04:59.608209 1079950 cli_runner.go:164] Run: docker volume create kindnet-000739 --label name.minikube.sigs.k8s.io=kindnet-000739 --label created_by.minikube.sigs.k8s.io=true
	I1208 02:04:59.625337 1079950 oci.go:103] Successfully created a docker volume kindnet-000739
	I1208 02:04:59.625423 1079950 cli_runner.go:164] Run: docker run --rm --name kindnet-000739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-000739 --entrypoint /usr/bin/test -v kindnet-000739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 02:05:00.347072 1079950 oci.go:107] Successfully prepared a docker volume kindnet-000739
	I1208 02:05:00.347151 1079950 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:05:00.347162 1079950 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 02:05:00.347249 1079950 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-000739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 02:05:04.388582 1079950 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-000739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.041287868s)
	I1208 02:05:04.388632 1079950 kic.go:203] duration metric: took 4.041450659s to extract preloaded images to volume ...
	W1208 02:05:04.388787 1079950 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 02:05:04.388914 1079950 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 02:05:04.456336 1079950 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-000739 --name kindnet-000739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-000739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-000739 --network kindnet-000739 --ip 192.168.85.2 --volume kindnet-000739:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 02:05:04.704627 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Running}}
	I1208 02:05:04.728576 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:04.758367 1079950 cli_runner.go:164] Run: docker exec kindnet-000739 stat /var/lib/dpkg/alternatives/iptables
	I1208 02:05:04.826939 1079950 oci.go:144] the created container "kindnet-000739" has a running status.
	I1208 02:05:04.826978 1079950 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa...
	I1208 02:05:05.223577 1079950 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 02:05:05.247654 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:05.268949 1079950 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 02:05:05.268972 1079950 kic_runner.go:114] Args: [docker exec --privileged kindnet-000739 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 02:05:05.328473 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:05.347944 1079950 machine.go:94] provisionDockerMachine start ...
	I1208 02:05:05.348049 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:05.364878 1079950 main.go:143] libmachine: Using SSH client type: native
	I1208 02:05:05.365221 1079950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1208 02:05:05.365237 1079950 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 02:05:05.365929 1079950 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 02:05:08.519211 1079950 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-000739
	
	I1208 02:05:08.519234 1079950 ubuntu.go:182] provisioning hostname "kindnet-000739"
	I1208 02:05:08.519313 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:08.543632 1079950 main.go:143] libmachine: Using SSH client type: native
	I1208 02:05:08.543947 1079950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1208 02:05:08.543959 1079950 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-000739 && echo "kindnet-000739" | sudo tee /etc/hostname
	I1208 02:05:08.707966 1079950 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-000739
	
	I1208 02:05:08.708054 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:08.725032 1079950 main.go:143] libmachine: Using SSH client type: native
	I1208 02:05:08.725349 1079950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1208 02:05:08.725370 1079950 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-000739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-000739/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-000739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 02:05:08.874977 1079950 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 02:05:08.875102 1079950 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 02:05:08.875154 1079950 ubuntu.go:190] setting up certificates
	I1208 02:05:08.875179 1079950 provision.go:84] configureAuth start
	I1208 02:05:08.875272 1079950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-000739
	I1208 02:05:08.892474 1079950 provision.go:143] copyHostCerts
	I1208 02:05:08.892546 1079950 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 02:05:08.892560 1079950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 02:05:08.892651 1079950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 02:05:08.892749 1079950 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 02:05:08.892760 1079950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 02:05:08.892787 1079950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 02:05:08.892845 1079950 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 02:05:08.892860 1079950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 02:05:08.892886 1079950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 02:05:08.892937 1079950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.kindnet-000739 san=[127.0.0.1 192.168.85.2 kindnet-000739 localhost minikube]
	I1208 02:05:09.221669 1079950 provision.go:177] copyRemoteCerts
	I1208 02:05:09.221736 1079950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 02:05:09.221778 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:09.239489 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:09.347444 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 02:05:09.365726 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1208 02:05:09.383346 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 02:05:09.400647 1079950 provision.go:87] duration metric: took 525.430485ms to configureAuth
	I1208 02:05:09.400683 1079950 ubuntu.go:206] setting minikube options for container-runtime
	I1208 02:05:09.400923 1079950 config.go:182] Loaded profile config "kindnet-000739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 02:05:09.401043 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:09.418136 1079950 main.go:143] libmachine: Using SSH client type: native
	I1208 02:05:09.418455 1079950 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1208 02:05:09.418475 1079950 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 02:05:09.722613 1079950 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
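Here the provisioner writes a one-line environment drop-in for CRI-O and restarts the service so the --insecure-registry option takes effect. A hedged Go sketch of that pattern (assumes root and systemd; file path and contents are taken from the log above):

```go
// Sketch: write /etc/sysconfig/crio.minikube and restart CRI-O,
// mirroring the SSH command shown in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}
```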
	I1208 02:05:09.722642 1079950 machine.go:97] duration metric: took 4.374677099s to provisionDockerMachine
	I1208 02:05:09.722653 1079950 client.go:176] duration metric: took 10.244329267s to LocalClient.Create
	I1208 02:05:09.722666 1079950 start.go:167] duration metric: took 10.244392102s to libmachine.API.Create "kindnet-000739"
	I1208 02:05:09.722673 1079950 start.go:293] postStartSetup for "kindnet-000739" (driver="docker")
	I1208 02:05:09.722684 1079950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 02:05:09.722743 1079950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 02:05:09.722786 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:09.741875 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:09.850822 1079950 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 02:05:09.854276 1079950 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 02:05:09.854302 1079950 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 02:05:09.854314 1079950 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 02:05:09.854376 1079950 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 02:05:09.854448 1079950 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 02:05:09.854552 1079950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 02:05:09.861988 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 02:05:09.883620 1079950 start.go:296] duration metric: took 160.930797ms for postStartSetup
	I1208 02:05:09.884028 1079950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-000739
	I1208 02:05:09.901294 1079950 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/config.json ...
	I1208 02:05:09.901572 1079950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 02:05:09.901625 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:09.918289 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:10.021555 1079950 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 02:05:10.028057 1079950 start.go:128] duration metric: took 10.55331968s to createHost
	I1208 02:05:10.028084 1079950 start.go:83] releasing machines lock for "kindnet-000739", held for 10.553458094s
	I1208 02:05:10.028161 1079950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-000739
	I1208 02:05:10.049463 1079950 ssh_runner.go:195] Run: cat /version.json
	I1208 02:05:10.049520 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:10.049837 1079950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 02:05:10.049893 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:10.086654 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:10.086647 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:10.280675 1079950 ssh_runner.go:195] Run: systemctl --version
	I1208 02:05:10.287369 1079950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 02:05:10.329022 1079950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 02:05:10.333481 1079950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 02:05:10.333558 1079950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 02:05:10.362925 1079950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 02:05:10.362952 1079950 start.go:496] detecting cgroup driver to use...
	I1208 02:05:10.362986 1079950 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 02:05:10.363040 1079950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 02:05:10.382150 1079950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 02:05:10.394535 1079950 docker.go:218] disabling cri-docker service (if available) ...
	I1208 02:05:10.394639 1079950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 02:05:10.412289 1079950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 02:05:10.430958 1079950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 02:05:10.542243 1079950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 02:05:10.661237 1079950 docker.go:234] disabling docker service ...
	I1208 02:05:10.661347 1079950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 02:05:10.681817 1079950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 02:05:10.695242 1079950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 02:05:10.828201 1079950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 02:05:10.952224 1079950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 02:05:10.966029 1079950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 02:05:10.981283 1079950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 02:05:10.981351 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:10.990558 1079950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 02:05:10.990622 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.003282 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.013281 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.022630 1079950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 02:05:11.031818 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.041377 1079950 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.055585 1079950 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:05:11.064879 1079950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 02:05:11.072981 1079950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 02:05:11.080809 1079950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:05:11.196276 1079950 ssh_runner.go:195] Run: sudo systemctl restart crio
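The CRI-O tuning above is done with line-oriented sed rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), followed by a daemon-reload and restart. A small Go sketch of the same key-replacement idea (illustrative; setKey is an assumed helper and the matching is deliberately simplified):

```go
// Sketch: replace "key = ..." lines in a TOML-style config, the same effect
// as the sed commands in the log (e.g. pause_image, cgroup_manager).
package main

import (
	"fmt"
	"strings"
)

func setKey(conf, key, value string) string {
	lines := strings.Split(conf, "\n")
	replaced := false
	for i, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasPrefix(t, key) && strings.Contains(t, "=") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, fmt.Sprintf("%s = %q", key, value))
	}
	return strings.Join(lines, "\n")
}

func main() {
	conf := "# crio runtime\npause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
```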
	I1208 02:05:11.373224 1079950 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 02:05:11.373311 1079950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 02:05:11.377328 1079950 start.go:564] Will wait 60s for crictl version
	I1208 02:05:11.377417 1079950 ssh_runner.go:195] Run: which crictl
	I1208 02:05:11.381105 1079950 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 02:05:11.409464 1079950 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 02:05:11.409575 1079950 ssh_runner.go:195] Run: crio --version
	I1208 02:05:11.437683 1079950 ssh_runner.go:195] Run: crio --version
	I1208 02:05:11.472779 1079950 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 02:05:11.475574 1079950 cli_runner.go:164] Run: docker network inspect kindnet-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:05:11.492298 1079950 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 02:05:11.496093 1079950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:05:11.507029 1079950 kubeadm.go:884] updating cluster {Name:kindnet-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 02:05:11.507196 1079950 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:05:11.507262 1079950 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:05:11.566987 1079950 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 02:05:11.567008 1079950 crio.go:433] Images already preloaded, skipping extraction
	I1208 02:05:11.567076 1079950 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:05:11.596919 1079950 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 02:05:11.596942 1079950 cache_images.go:86] Images are preloaded, skipping loading
	I1208 02:05:11.596950 1079950 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 02:05:11.597034 1079950 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-000739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1208 02:05:11.597115 1079950 ssh_runner.go:195] Run: crio config
	I1208 02:05:11.671398 1079950 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:05:11.671432 1079950 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 02:05:11.671457 1079950 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-000739 NodeName:kindnet-000739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 02:05:11.671585 1079950 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-000739"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
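The kubeadm.yaml above is rendered from the cluster parameters listed in the preceding kubeadm options line (advertise address, node name, CRI socket, subnets). A short Go sketch of that rendering approach with text/template; the fragment below is a trimmed illustration, not minikube's actual template:

```go
// Sketch: render a fragment of an InitConfiguration from cluster parameters,
// analogous to how the kubeadm.yaml shown in the log is produced.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, map[string]any{
		"NodeIP":        "192.168.85.2",
		"APIServerPort": 8443,
		"CRISocket":     "unix:///var/run/crio/crio.sock",
		"NodeName":      "kindnet-000739",
	})
}
```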
	I1208 02:05:11.671662 1079950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 02:05:11.679271 1079950 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 02:05:11.679393 1079950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 02:05:11.686932 1079950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1208 02:05:11.699782 1079950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 02:05:11.712548 1079950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1208 02:05:11.724654 1079950 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 02:05:11.728142 1079950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:05:11.738013 1079950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:05:11.858460 1079950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:05:11.875374 1079950 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739 for IP: 192.168.85.2
	I1208 02:05:11.875399 1079950 certs.go:195] generating shared ca certs ...
	I1208 02:05:11.875415 1079950 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:11.875557 1079950 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 02:05:11.875606 1079950 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 02:05:11.875618 1079950 certs.go:257] generating profile certs ...
	I1208 02:05:11.875674 1079950 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.key
	I1208 02:05:11.875690 1079950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.crt with IP's: []
	I1208 02:05:12.211473 1079950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.crt ...
	I1208 02:05:12.211509 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.crt: {Name:mkc615d5edc00a36a80988c10fabcb9eb87c8bc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:12.211730 1079950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.key ...
	I1208 02:05:12.211745 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/client.key: {Name:mkc2d7e9c3e44501f8579f13769b62e2e0fcb96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:12.211837 1079950 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key.b34b4d8f
	I1208 02:05:12.211858 1079950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt.b34b4d8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 02:05:12.470685 1079950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt.b34b4d8f ...
	I1208 02:05:12.470720 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt.b34b4d8f: {Name:mkdf56f11acedbfef9bd6c534561144b205e7b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:12.470933 1079950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key.b34b4d8f ...
	I1208 02:05:12.470951 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key.b34b4d8f: {Name:mk8b7331c251b02329afa997a904d307b66fb61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:12.471034 1079950 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt.b34b4d8f -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt
	I1208 02:05:12.471127 1079950 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key.b34b4d8f -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key
	I1208 02:05:12.471186 1079950 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.key
	I1208 02:05:12.471206 1079950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.crt with IP's: []
	I1208 02:05:12.616494 1079950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.crt ...
	I1208 02:05:12.616524 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.crt: {Name:mkfe70157ead017c84b7ebca82043b795d0e41ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:12.616698 1079950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.key ...
	I1208 02:05:12.616712 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.key: {Name:mk0757ebedde9bb91a4a0336bd6903900a540a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
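The profile certificates generated above (client, apiserver with its SAN list, proxy-client) are plain x509 certificates signed by the local minikube CA. A compact Go sketch of issuing a server certificate with IP and DNS SANs from a CA using only the standard library; key size, validity period, and subjects are assumptions, and errors are elided for brevity:

```go
// Sketch: sign a server certificate with IP/DNS SANs against a CA key pair,
// similar in spirit to the apiserver cert generation logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (minikube reuses an existing one; generated here for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the SANs seen in the log for the apiserver cert.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"kindnet-000739", "localhost", "minikube"},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```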
	I1208 02:05:12.616935 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 02:05:12.616984 1079950 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 02:05:12.616998 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 02:05:12.617027 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 02:05:12.617057 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 02:05:12.617086 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 02:05:12.617138 1079950 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 02:05:12.617711 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 02:05:12.637236 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 02:05:12.655998 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 02:05:12.674955 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 02:05:12.692853 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 02:05:12.710448 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 02:05:12.728769 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 02:05:12.746489 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/kindnet-000739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 02:05:12.768114 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 02:05:12.789500 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 02:05:12.809687 1079950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 02:05:12.829410 1079950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 02:05:12.842884 1079950 ssh_runner.go:195] Run: openssl version
	I1208 02:05:12.849285 1079950 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 02:05:12.856801 1079950 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 02:05:12.864418 1079950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 02:05:12.868260 1079950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 02:05:12.868327 1079950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 02:05:12.909072 1079950 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 02:05:12.916510 1079950 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 02:05:12.923926 1079950 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 02:05:12.931543 1079950 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 02:05:12.939268 1079950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 02:05:12.943162 1079950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 02:05:12.943274 1079950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 02:05:12.984596 1079950 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 02:05:12.992128 1079950 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 02:05:13.000637 1079950 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:05:13.010326 1079950 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 02:05:13.020443 1079950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:05:13.025062 1079950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:05:13.025135 1079950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:05:13.066677 1079950 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 02:05:13.074243 1079950 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
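Each CA above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A hedged Go sketch of that hash-and-symlink step, shelling out to openssl as the log does; installCA is an assumed helper name and openssl is assumed to be on PATH:

```go
// Sketch: link a CA cert into /etc/ssl/certs under its openssl subject hash,
// matching the "openssl x509 -hash" + "ln -fs" pair in the log.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```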
	I1208 02:05:13.081481 1079950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 02:05:13.085063 1079950 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 02:05:13.085125 1079950 kubeadm.go:401] StartCluster: {Name:kindnet-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:05:13.085199 1079950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 02:05:13.085258 1079950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 02:05:13.113598 1079950 cri.go:89] found id: ""
	I1208 02:05:13.113679 1079950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 02:05:13.121405 1079950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 02:05:13.128992 1079950 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 02:05:13.129057 1079950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 02:05:13.136755 1079950 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 02:05:13.136774 1079950 kubeadm.go:158] found existing configuration files:
	
	I1208 02:05:13.136824 1079950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 02:05:13.144221 1079950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 02:05:13.144338 1079950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 02:05:13.151683 1079950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 02:05:13.159382 1079950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 02:05:13.159476 1079950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 02:05:13.167006 1079950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 02:05:13.174508 1079950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 02:05:13.174576 1079950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 02:05:13.182053 1079950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 02:05:13.189606 1079950 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 02:05:13.189681 1079950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 02:05:13.196868 1079950 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 02:05:13.245730 1079950 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 02:05:13.246046 1079950 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 02:05:13.268219 1079950 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 02:05:13.268293 1079950 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 02:05:13.268348 1079950 kubeadm.go:319] OS: Linux
	I1208 02:05:13.268399 1079950 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 02:05:13.268451 1079950 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 02:05:13.268503 1079950 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 02:05:13.268554 1079950 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 02:05:13.268606 1079950 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 02:05:13.268657 1079950 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 02:05:13.268706 1079950 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 02:05:13.268757 1079950 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 02:05:13.268807 1079950 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 02:05:13.333982 1079950 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 02:05:13.334099 1079950 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 02:05:13.334193 1079950 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 02:05:13.341822 1079950 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 02:05:13.348637 1079950 out.go:252]   - Generating certificates and keys ...
	I1208 02:05:13.348752 1079950 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 02:05:13.348844 1079950 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 02:05:13.985360 1079950 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 02:05:14.265060 1079950 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 02:05:15.076976 1079950 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 02:05:15.611857 1079950 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 02:05:16.020850 1079950 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 02:05:16.021418 1079950 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-000739 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 02:05:17.279390 1079950 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 02:05:17.279681 1079950 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-000739 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 02:05:17.687817 1079950 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 02:05:18.015198 1079950 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 02:05:18.648469 1079950 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 02:05:18.648776 1079950 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 02:05:18.724183 1079950 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 02:05:19.042323 1079950 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 02:05:19.455374 1079950 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 02:05:20.339477 1079950 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 02:05:20.995914 1079950 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 02:05:20.996512 1079950 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 02:05:21.000845 1079950 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 02:05:21.004985 1079950 out.go:252]   - Booting up control plane ...
	I1208 02:05:21.005097 1079950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 02:05:21.005175 1079950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 02:05:21.006996 1079950 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 02:05:21.029257 1079950 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 02:05:21.029366 1079950 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 02:05:21.037877 1079950 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 02:05:21.038203 1079950 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 02:05:21.038250 1079950 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 02:05:21.171316 1079950 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 02:05:21.171437 1079950 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 02:05:23.170767 1079950 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001471018s
	I1208 02:05:23.174164 1079950 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 02:05:23.174266 1079950 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1208 02:05:23.174520 1079950 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 02:05:23.174611 1079950 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 02:05:27.919140 1079950 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.744427675s
	I1208 02:05:28.947243 1079950 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.773014766s
	I1208 02:05:30.676429 1079950 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.501998929s
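The control-plane-check phase above simply polls each component's health endpoint (kube-apiserver /livez, controller-manager /healthz, scheduler /livez) until it answers 200. A minimal Go sketch of such a poll loop; the URL is copied from the log, while the timeout and the skip-verify transport are assumptions for illustration only:

```go
// Sketch: wait for a control-plane health endpoint to report healthy,
// analogous to kubeadm's control-plane-check phase logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute))
}
```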
	I1208 02:05:30.708657 1079950 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 02:05:30.724522 1079950 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 02:05:30.738950 1079950 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 02:05:30.739179 1079950 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-000739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 02:05:30.750933 1079950 kubeadm.go:319] [bootstrap-token] Using token: dauggj.vrtxsp3nkda1vus1
	I1208 02:05:30.753894 1079950 out.go:252]   - Configuring RBAC rules ...
	I1208 02:05:30.754027 1079950 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 02:05:30.758082 1079950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 02:05:30.766989 1079950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 02:05:30.772977 1079950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 02:05:30.777279 1079950 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 02:05:30.781294 1079950 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 02:05:31.085560 1079950 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 02:05:31.519817 1079950 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 02:05:32.086197 1079950 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 02:05:32.087676 1079950 kubeadm.go:319] 
	I1208 02:05:32.087748 1079950 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 02:05:32.087754 1079950 kubeadm.go:319] 
	I1208 02:05:32.087831 1079950 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 02:05:32.087835 1079950 kubeadm.go:319] 
	I1208 02:05:32.087860 1079950 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 02:05:32.087919 1079950 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 02:05:32.087969 1079950 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 02:05:32.087973 1079950 kubeadm.go:319] 
	I1208 02:05:32.088038 1079950 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 02:05:32.088044 1079950 kubeadm.go:319] 
	I1208 02:05:32.088091 1079950 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 02:05:32.088095 1079950 kubeadm.go:319] 
	I1208 02:05:32.088146 1079950 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 02:05:32.088221 1079950 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 02:05:32.088289 1079950 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 02:05:32.088293 1079950 kubeadm.go:319] 
	I1208 02:05:32.088376 1079950 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 02:05:32.088454 1079950 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 02:05:32.088457 1079950 kubeadm.go:319] 
	I1208 02:05:32.088541 1079950 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dauggj.vrtxsp3nkda1vus1 \
	I1208 02:05:32.088645 1079950 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 \
	I1208 02:05:32.088666 1079950 kubeadm.go:319] 	--control-plane 
	I1208 02:05:32.088669 1079950 kubeadm.go:319] 
	I1208 02:05:32.088754 1079950 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 02:05:32.088758 1079950 kubeadm.go:319] 
	I1208 02:05:32.088839 1079950 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dauggj.vrtxsp3nkda1vus1 \
	I1208 02:05:32.088941 1079950 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2f7dd7a166e4dd1a4abc3c3e624c9bfa04a77c3c319e242f9f9a9a49ac55d954 
	I1208 02:05:32.092334 1079950 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 02:05:32.092565 1079950 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 02:05:32.092675 1079950 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 02:05:32.092696 1079950 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:05:32.095812 1079950 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 02:05:32.098873 1079950 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 02:05:32.103033 1079950 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 02:05:32.103057 1079950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 02:05:32.118435 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 02:05:32.437092 1079950 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 02:05:32.437234 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:32.437306 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-000739 minikube.k8s.io/updated_at=2025_12_08T02_05_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=kindnet-000739 minikube.k8s.io/primary=true
	I1208 02:05:32.460254 1079950 ops.go:34] apiserver oom_adj: -16
	I1208 02:05:32.638395 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:33.138992 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:33.638575 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:34.139045 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:34.639133 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:35.138572 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:35.639259 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:36.139111 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:36.638449 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:37.138555 1079950 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:05:37.222487 1079950 kubeadm.go:1114] duration metric: took 4.785299371s to wait for elevateKubeSystemPrivileges
	I1208 02:05:37.222514 1079950 kubeadm.go:403] duration metric: took 24.137394135s to StartCluster
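The repeated `kubectl get sa default` runs above are a retry loop: the default service account only exists once the controller-manager has created it, so minikube polls roughly every 500ms before proceeding. A Go sketch of that wait (paths copied from the log; waitForDefaultSA is an assumed name):

```go
// Sketch: poll "kubectl get sa default" until the default service account
// exists, mirroring the ~500ms retry loop in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account is visible, RBAC bootstrap can continue
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.34.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute))
}
```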
	I1208 02:05:37.222534 1079950 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:37.222594 1079950 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 02:05:37.223628 1079950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:05:37.223863 1079950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 02:05:37.223870 1079950 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 02:05:37.224166 1079950 config.go:182] Loaded profile config "kindnet-000739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 02:05:37.224214 1079950 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 02:05:37.224282 1079950 addons.go:70] Setting storage-provisioner=true in profile "kindnet-000739"
	I1208 02:05:37.224300 1079950 addons.go:239] Setting addon storage-provisioner=true in "kindnet-000739"
	I1208 02:05:37.224327 1079950 host.go:66] Checking if "kindnet-000739" exists ...
	I1208 02:05:37.224867 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:37.225401 1079950 addons.go:70] Setting default-storageclass=true in profile "kindnet-000739"
	I1208 02:05:37.225435 1079950 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-000739"
	I1208 02:05:37.225738 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:37.228270 1079950 out.go:179] * Verifying Kubernetes components...
	I1208 02:05:37.239031 1079950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:05:37.254191 1079950 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 02:05:37.262148 1079950 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 02:05:37.262178 1079950 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 02:05:37.262253 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:37.273828 1079950 addons.go:239] Setting addon default-storageclass=true in "kindnet-000739"
	I1208 02:05:37.273879 1079950 host.go:66] Checking if "kindnet-000739" exists ...
	I1208 02:05:37.274307 1079950 cli_runner.go:164] Run: docker container inspect kindnet-000739 --format={{.State.Status}}
	I1208 02:05:37.322781 1079950 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 02:05:37.322806 1079950 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 02:05:37.322899 1079950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-000739
	I1208 02:05:37.323152 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:37.368454 1079950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/kindnet-000739/id_rsa Username:docker}
	I1208 02:05:37.503214 1079950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 02:05:37.533854 1079950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:05:37.571873 1079950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 02:05:37.704302 1079950 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 02:05:38.163870 1079950 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1208 02:05:38.165884 1079950 node_ready.go:35] waiting up to 15m0s for node "kindnet-000739" to be "Ready" ...
	I1208 02:05:38.381940 1079950 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1208 02:05:38.384932 1079950 addons.go:530] duration metric: took 1.160706851s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1208 02:05:38.667798 1079950 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-000739" context rescaled to 1 replicas
	W1208 02:05:40.169577 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:42.176577 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:44.669683 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:46.669970 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:48.670419 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:51.170314 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:53.669472 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:55.669682 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	W1208 02:05:57.670391 1079950 node_ready.go:57] node "kindnet-000739" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072876779Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072883992Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072889867Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072896496Z" level=info msg="RDT not available in the host system"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072909567Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073778565Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073798208Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073814225Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074485379Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074501871Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074630463Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.075394984Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07576115Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07584312Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123803822Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123963487Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124019635Z" level=info msg="Create NRI interface"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124120732Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124136092Z" level=info msg="runtime interface created"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124147144Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124154217Z" level=info msg="runtime interface starting up..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124160937Z" level=info msg="starting plugins..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124173171Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124229549Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:50:57 no-preload-389831 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:06:02.350324    8154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:06:02.350974    8154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:06:02.352548    8154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:06:02.353040    8154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:06:02.354609    8154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:03] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:05] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:06:02 up  6:48,  0 user,  load average: 1.75, 1.23, 1.23
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:05:59 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:06:00 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 08 02:06:00 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:00 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:00 no-preload-389831 kubelet[8020]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:00 no-preload-389831 kubelet[8020]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:00 no-preload-389831 kubelet[8020]: E1208 02:06:00.355621    8020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:06:00 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:06:00 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:01 no-preload-389831 kubelet[8026]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:01 no-preload-389831 kubelet[8026]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:01 no-preload-389831 kubelet[8026]: E1208 02:06:01.076581    8026 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:01 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:06:01 no-preload-389831 kubelet[8061]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:01 no-preload-389831 kubelet[8061]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:06:01 no-preload-389831 kubelet[8061]: E1208 02:06:01.831037    8061 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:06:01 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 2 (382.105108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.91s)
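The kubelet excerpt above points at the apparent root cause of this failure: the v1.35.0-beta.0 kubelet on no-preload-389831 exits during configuration validation because the host is still on cgroup v1, so the API server never comes back after the stop and the user-app check times out. A minimal host-side triage check (not part of the test suite, shown only as a sketch):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on a legacy cgroup v1 host
	stat -fc %T /sys/fs/cgroup
	# present at the root of a cgroup v2 mount; absent on a pure cgroup v1 hierarchy
	cat /sys/fs/cgroup/cgroup.controllers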

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-448023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (328.91ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-448023 -n newest-cni-448023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (317.799626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-448023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (321.846445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-448023 -n newest-cni-448023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (325.404602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
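For reference, the pause/unpause sequence and the status probes asserted above can be replayed by hand against the same profile; the commands below are collected verbatim from this log, with the expected values taken from the want= assertions:

	out/minikube-linux-arm64 pause -p newest-cni-448023 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023   # want "Paused", got "Stopped"
	out/minikube-linux-arm64 unpause -p newest-cni-448023 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023   # want "Running", got "Stopped"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-448023 -n newest-cni-448023     # want "Running", got "Stopped"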
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-448023
helpers_test.go:243: (dbg) docker inspect newest-cni-448023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	        "Created": "2025-12-08T01:46:34.353152924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1055155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:56:41.277432033Z",
	            "FinishedAt": "2025-12-08T01:56:39.892982826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hosts",
	        "LogPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9-json.log",
	        "Name": "/newest-cni-448023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-448023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-448023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	                "LowerDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-448023",
	                "Source": "/var/lib/docker/volumes/newest-cni-448023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-448023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-448023",
	                "name.minikube.sigs.k8s.io": "newest-cni-448023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "813118b42480babba062786ba0ba8ff3e7452eec7c2d8f800688d8fd68359617",
	            "SandboxKey": "/var/run/docker/netns/813118b42480",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-448023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:9d:8d:8a:21:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec5af7f0fdbc70a95f83d97d8a04145286c7acd7e864f0f850cd22983b469ab7",
	                    "EndpointID": "577f657908aa7f309cdfc5d98526f00d0b1c5b25cb769be3035b9f923a1c6bf3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-448023",
	                        "ff1a1ad3010f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
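The inspect output above already answers the first post-mortem question: the kic container itself is still running and is not paused at the Docker level. A one-liner to pull just those two fields, using the same Go-template style the helpers use (output shown for the state captured above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-448023
	# -> running paused=false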
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (340.761527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25: (1.802233471s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:54 UTC │                     │
	│ stop    │ -p newest-cni-448023 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p newest-cni-448023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │                     │
	│ image   │ newest-cni-448023 image list --format=json                                                                                                                                                                                                           │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	│ pause   │ -p newest-cni-448023 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	│ unpause │ -p newest-cni-448023 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:56:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:56:40.995814 1055021 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:56:40.995993 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996024 1055021 out.go:374] Setting ErrFile to fd 2...
	I1208 01:56:40.996044 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996297 1055021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:56:40.996698 1055021 out.go:368] Setting JSON to false
	I1208 01:56:40.997651 1055021 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23933,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:56:40.997760 1055021 start.go:143] virtualization:  
	I1208 01:56:41.000930 1055021 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:56:41.005767 1055021 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:56:41.005958 1055021 notify.go:221] Checking for updates...
	I1208 01:56:41.009547 1055021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:56:41.012698 1055021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:41.016029 1055021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:56:41.019114 1055021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:56:41.022081 1055021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:56:41.025425 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:41.026092 1055021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:56:41.062956 1055021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:56:41.063137 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.133740 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.124579493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.133841 1055021 docker.go:319] overlay module found
	I1208 01:56:41.136922 1055021 out.go:179] * Using the docker driver based on existing profile
	I1208 01:56:41.139812 1055021 start.go:309] selected driver: docker
	I1208 01:56:41.139836 1055021 start.go:927] validating driver "docker" against &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.139955 1055021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:56:41.140671 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.193763 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.183682659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.194162 1055021 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:56:41.194196 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:41.194260 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:41.194313 1055021 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.197698 1055021 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:56:41.200489 1055021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:56:41.203470 1055021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:56:41.206341 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:41.206393 1055021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:56:41.206406 1055021 cache.go:65] Caching tarball of preloaded images
	I1208 01:56:41.206414 1055021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:56:41.206514 1055021 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:56:41.206524 1055021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:56:41.206659 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.226393 1055021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:56:41.226417 1055021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:56:41.226437 1055021 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:56:41.226470 1055021 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:56:41.226539 1055021 start.go:364] duration metric: took 45.818µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:56:41.226562 1055021 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:56:41.226569 1055021 fix.go:54] fixHost starting: 
	I1208 01:56:41.226872 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.244524 1055021 fix.go:112] recreateIfNeeded on newest-cni-448023: state=Stopped err=<nil>
	W1208 01:56:41.244564 1055021 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:56:42.018560 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:44.518581 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:41.247746 1055021 out.go:252] * Restarting existing docker container for "newest-cni-448023" ...
	I1208 01:56:41.247847 1055021 cli_runner.go:164] Run: docker start newest-cni-448023
	I1208 01:56:41.505835 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.523362 1055021 kic.go:430] container "newest-cni-448023" state is running.
	I1208 01:56:41.523773 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:41.545536 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.545777 1055021 machine.go:94] provisionDockerMachine start ...
	I1208 01:56:41.545848 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:41.570998 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:41.571328 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:41.571336 1055021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:56:41.572041 1055021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:56:44.722629 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.722658 1055021 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:56:44.722733 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.743562 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.743889 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.743906 1055021 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:56:44.912657 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.912755 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.930550 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.930902 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.930926 1055021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:56:45.125086 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:56:45.125166 1055021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:56:45.125215 1055021 ubuntu.go:190] setting up certificates
	I1208 01:56:45.125242 1055021 provision.go:84] configureAuth start
	I1208 01:56:45.125340 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:45.146934 1055021 provision.go:143] copyHostCerts
	I1208 01:56:45.147071 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:56:45.147086 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:56:45.147185 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:56:45.147315 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:56:45.147333 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:56:45.147379 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:56:45.147450 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:56:45.147463 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:56:45.147494 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:56:45.147561 1055021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:56:45.319641 1055021 provision.go:177] copyRemoteCerts
	I1208 01:56:45.319718 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:56:45.319771 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.338151 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.446957 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:56:45.464534 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:56:45.481634 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:56:45.499110 1055021 provision.go:87] duration metric: took 373.83191ms to configureAuth
	I1208 01:56:45.499137 1055021 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:56:45.499354 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:45.499462 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.519312 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:45.520323 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:45.520348 1055021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:56:45.838649 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:56:45.838675 1055021 machine.go:97] duration metric: took 4.292880237s to provisionDockerMachine
	I1208 01:56:45.838688 1055021 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:56:45.838701 1055021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:56:45.838764 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:56:45.838808 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.856107 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.962864 1055021 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:56:45.966280 1055021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:56:45.966310 1055021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:56:45.966321 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:56:45.966376 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:56:45.966455 1055021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:56:45.966565 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:56:45.973812 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:45.990960 1055021 start.go:296] duration metric: took 152.256258ms for postStartSetup
	I1208 01:56:45.991062 1055021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:56:45.991102 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.010295 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.111994 1055021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:56:46.116921 1055021 fix.go:56] duration metric: took 4.890342951s for fixHost
	I1208 01:56:46.116949 1055021 start.go:83] releasing machines lock for "newest-cni-448023", held for 4.89039814s
	I1208 01:56:46.117023 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:46.133998 1055021 ssh_runner.go:195] Run: cat /version.json
	I1208 01:56:46.134053 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.134086 1055021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:56:46.134143 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.155007 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.157578 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.259943 1055021 ssh_runner.go:195] Run: systemctl --version
	I1208 01:56:46.363782 1055021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:56:46.401418 1055021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:56:46.405895 1055021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:56:46.406027 1055021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:56:46.414120 1055021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:56:46.414145 1055021 start.go:496] detecting cgroup driver to use...
	I1208 01:56:46.414178 1055021 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:56:46.414240 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:56:46.430116 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:56:46.443306 1055021 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:56:46.443370 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:56:46.459228 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:56:46.472250 1055021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:56:46.583643 1055021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:56:46.702836 1055021 docker.go:234] disabling docker service ...
	I1208 01:56:46.702974 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:56:46.718081 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:56:46.731165 1055021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:56:46.841278 1055021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:56:46.959396 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:56:46.972986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:56:46.988672 1055021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:56:46.988773 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:46.998541 1055021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:56:46.998635 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.012333 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.022719 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.033036 1055021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:56:47.042410 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.053356 1055021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.066055 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
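The sed commands above rewrite the CRI-O drop-in in place: pause image, cgroupfs as cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. A minimal sketch for confirming the result on the node, assuming the same drop-in path the commands use:

    # Sketch only; path and expected values are taken from the sed commands above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",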
	I1208 01:56:47.076106 1055021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:56:47.083610 1055021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:56:47.090937 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.204760 1055021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 01:56:47.377268 1055021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:56:47.377383 1055021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:56:47.381048 1055021 start.go:564] Will wait 60s for crictl version
	I1208 01:56:47.381161 1055021 ssh_runner.go:195] Run: which crictl
	I1208 01:56:47.384529 1055021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:56:47.407415 1055021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:56:47.407590 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.438310 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.480028 1055021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:56:47.482931 1055021 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:56:47.498300 1055021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:56:47.502114 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.515024 1055021 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:56:47.517850 1055021 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:56:47.518007 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:47.518083 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.554783 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.554810 1055021 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:56:47.554891 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.580370 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.580396 1055021 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:56:47.580404 1055021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:56:47.580497 1055021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
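The kubelet unit drop-in rendered above is written to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down. A sketch for inspecting the effective unit on the node, assuming systemd manages kubelet as shown:

    # Sketch; prints the base kubelet unit plus every drop-in, including the ExecStart above.
    sudo systemctl cat kubelet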
	I1208 01:56:47.580581 1055021 ssh_runner.go:195] Run: crio config
	I1208 01:56:47.630652 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:47.630677 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:47.630697 1055021 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:56:47.630720 1055021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:56:47.630943 1055021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:56:47.631027 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:56:47.638867 1055021 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:56:47.638960 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:56:47.646535 1055021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:56:47.659466 1055021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:56:47.672488 1055021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
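At this point the generated kubeadm config sits on the node as /var/tmp/minikube/kubeadm.yaml.new. A sketch of an independent sanity check, assuming kubeadm is among the binaries found under /var/lib/minikube/binaries/v1.35.0-beta.0 (on this restart path minikube itself does not re-run init, as the later restartPrimaryControlPlane lines show):

    # Sketch only; --dry-run validates the config and prints the would-be manifests
    # without touching the running cluster.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run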
	I1208 01:56:47.685612 1055021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:56:47.689373 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.699289 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.852921 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:47.877101 1055021 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:56:47.877130 1055021 certs.go:195] generating shared ca certs ...
	I1208 01:56:47.877147 1055021 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:47.877305 1055021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:56:47.877358 1055021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:56:47.877370 1055021 certs.go:257] generating profile certs ...
	I1208 01:56:47.877482 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:56:47.877551 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:56:47.877603 1055021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:56:47.877731 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:56:47.877771 1055021 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:56:47.877792 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:56:47.877831 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:56:47.877859 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:56:47.877890 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:56:47.877943 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:47.879217 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:56:47.903514 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:56:47.922072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:56:47.939555 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:56:47.956891 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:56:47.976072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:56:47.994485 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:56:48.016256 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:56:48.036003 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:56:48.058425 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:56:48.078107 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:56:48.096426 1055021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:56:48.110183 1055021 ssh_runner.go:195] Run: openssl version
	I1208 01:56:48.117292 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.125194 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:56:48.133030 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136789 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136880 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.178238 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:56:48.186394 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.194429 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:56:48.203481 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207582 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207655 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.249053 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:56:48.257115 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.265010 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:56:48.272913 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276751 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276818 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.318199 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
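The three symlink checks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash naming: each link name is the value printed by the preceding openssl x509 -hash -noout call plus a ".0" suffix. A sketch of the correspondence for the minikube CA, using the paths from the log:

    # Sketch; the hash output should match the symlink name checked above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941
    ls -l /etc/ssl/certs/b5213941.0
    # -> b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem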
	I1208 01:56:48.326277 1055021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:56:48.330322 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:56:48.371576 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:56:48.412414 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:56:48.454546 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:56:48.499800 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:56:48.544265 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:56:48.590374 1055021 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:48.590473 1055021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:56:48.590547 1055021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:56:48.619202 1055021 cri.go:89] found id: ""
	I1208 01:56:48.619330 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:56:48.627096 1055021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:56:48.627120 1055021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:56:48.627172 1055021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:56:48.634458 1055021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:56:48.635058 1055021 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.635319 1055021 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-448023" cluster setting kubeconfig missing "newest-cni-448023" context setting]
	I1208 01:56:48.635800 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.637157 1055021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:56:48.644838 1055021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:56:48.644913 1055021 kubeadm.go:602] duration metric: took 17.785882ms to restartPrimaryControlPlane
	I1208 01:56:48.644930 1055021 kubeadm.go:403] duration metric: took 54.567759ms to StartCluster
	I1208 01:56:48.644947 1055021 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.645007 1055021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.645870 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.646084 1055021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:56:48.646389 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:48.646439 1055021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:56:48.646504 1055021 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-448023"
	I1208 01:56:48.646529 1055021 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-448023"
	I1208 01:56:48.646555 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647285 1055021 addons.go:70] Setting dashboard=true in profile "newest-cni-448023"
	I1208 01:56:48.647305 1055021 addons.go:239] Setting addon dashboard=true in "newest-cni-448023"
	W1208 01:56:48.647311 1055021 addons.go:248] addon dashboard should already be in state true
	I1208 01:56:48.647331 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.647957 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.648448 1055021 addons.go:70] Setting default-storageclass=true in profile "newest-cni-448023"
	I1208 01:56:48.648476 1055021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-448023"
	I1208 01:56:48.648734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.651945 1055021 out.go:179] * Verifying Kubernetes components...
	I1208 01:56:48.654867 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:48.684864 1055021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:56:48.691009 1055021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:56:48.694226 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:56:48.694251 1055021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:56:48.694323 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.695436 1055021 addons.go:239] Setting addon default-storageclass=true in "newest-cni-448023"
	I1208 01:56:48.695482 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.695884 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.701699 1055021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1208 01:56:47.019431 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:49.518464 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:48.704558 1055021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.704591 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:56:48.704655 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.736846 1055021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.736869 1055021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:56:48.736936 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.742543 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.766983 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.785430 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.885046 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:48.955470 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:56:48.955498 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:56:48.963459 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.965887 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.978338 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:56:48.978366 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:56:49.016188 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:56:49.016210 1055021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:56:49.061303 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:56:49.061328 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:56:49.074921 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:56:49.074987 1055021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:56:49.087412 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:56:49.087487 1055021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:56:49.099641 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:56:49.099667 1055021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:56:49.112487 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:56:49.112550 1055021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:56:49.125264 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.125288 1055021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:56:49.138335 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.508759 1055021 api_server.go:52] waiting for apiserver process to appear ...
	W1208 01:56:49.508918 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509385 1055021 retry.go:31] will retry after 199.05184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509006 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509406 1055021 retry.go:31] will retry after 322.784094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509263 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509418 1055021 retry.go:31] will retry after 353.691521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509538 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:49.709327 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:49.771304 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.771383 1055021 retry.go:31] will retry after 463.845922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.832454 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:49.863948 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:49.893225 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.893260 1055021 retry.go:31] will retry after 412.627767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.933504 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.933538 1055021 retry.go:31] will retry after 461.252989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.009945 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.235907 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:50.306466 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:50.322038 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.322071 1055021 retry.go:31] will retry after 523.830022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:50.380008 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.380051 1055021 retry.go:31] will retry after 753.154513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.395255 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:50.456642 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.456676 1055021 retry.go:31] will retry after 803.433098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.509737 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.846838 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:50.908365 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.908408 1055021 retry.go:31] will retry after 671.521026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.519391 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:54.018689 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:51.009996 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.134042 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.192423 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.192455 1055021 retry.go:31] will retry after 689.227768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.260665 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:51.319134 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.319182 1055021 retry.go:31] will retry after 541.526321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.509442 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.580384 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:51.640452 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.640485 1055021 retry.go:31] will retry after 844.977075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.861863 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:51.882351 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.944280 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.944321 1055021 retry.go:31] will retry after 1.000499188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.967122 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.967155 1055021 retry.go:31] will retry after 859.890122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.010305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:52.486447 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:52.510056 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:56:52.585753 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.585816 1055021 retry.go:31] will retry after 1.004705222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.828167 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:52.886091 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.886122 1055021 retry.go:31] will retry after 2.82316744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.945292 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:53.006627 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.006710 1055021 retry.go:31] will retry after 2.04955933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.009824 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.510073 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.591501 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:53.650678 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.650712 1055021 retry.go:31] will retry after 3.502569911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:54.010159 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:54.509667 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.009590 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.057336 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:55.132269 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.132307 1055021 retry.go:31] will retry after 2.513983979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.509439 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.710171 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:55.769058 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.769091 1055021 retry.go:31] will retry after 2.669645777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:56.518414 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:58.518521 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:59.018412 1047159 node_ready.go:38] duration metric: took 6m0.000405007s for node "no-preload-389831" to be "Ready" ...
	I1208 01:56:59.026905 1047159 out.go:203] 
	W1208 01:56:59.029838 1047159 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:56:59.029857 1047159 out.go:285] * 
	W1208 01:56:59.032175 1047159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:56:59.035425 1047159 out.go:203] 
	I1208 01:56:56.009694 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.509523 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.010140 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.153585 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:57.218181 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.218214 1055021 retry.go:31] will retry after 3.909169329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.647096 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:57.710136 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.710169 1055021 retry.go:31] will retry after 4.894098122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.009665 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:58.439443 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:58.505497 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.505529 1055021 retry.go:31] will retry after 6.007342944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.009469 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.510388 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.015300 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.509494 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.010257 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.128215 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:01.190419 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.190453 1055021 retry.go:31] will retry after 9.504933562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.509623 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.009676 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.509462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.605116 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:02.675800 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:02.675835 1055021 retry.go:31] will retry after 6.984717516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:03.009407 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:03.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.015233 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.509531 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.514060 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:04.574188 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:04.574220 1055021 retry.go:31] will retry after 6.522846226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:05.012398 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.509759 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.010229 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.509419 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.009462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.510275 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.010363 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.010036 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.509454 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.661163 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:09.722054 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:09.722085 1055021 retry.go:31] will retry after 5.465119302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.010374 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.510222 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.696134 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:10.771084 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.771123 1055021 retry.go:31] will retry after 11.695285792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.009829 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.098157 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:11.159270 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.159302 1055021 retry.go:31] will retry after 8.417822009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.509651 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.010126 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.009464 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.510317 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.009529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.510393 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.009573 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.188355 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:15.251108 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.251147 1055021 retry.go:31] will retry after 12.201311078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.509570 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.009635 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.009802 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.510253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.509509 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.009459 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.509684 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.577986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:19.638356 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:19.638389 1055021 retry.go:31] will retry after 8.001395588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:20.012301 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.509725 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.010367 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.509456 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.009599 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.467388 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:57:22.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:22.532031 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:22.532062 1055021 retry.go:31] will retry after 11.135828112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:23.009468 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.509432 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.010095 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.510255 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.012400 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.010403 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.452716 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:27.510223 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:27.519149 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.519184 1055021 retry.go:31] will retry after 13.452567778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.640862 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:27.703487 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.703522 1055021 retry.go:31] will retry after 26.167048463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:28.009930 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:28.509594 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.009708 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.510396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.009745 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.010280 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.010087 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.509477 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.010351 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.509804 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.668898 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:33.729185 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:33.729219 1055021 retry.go:31] will retry after 25.894597219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:34.009473 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:34.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.010355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.010451 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.509505 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.009541 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.509700 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.014196 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.509592 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.010217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.510250 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.015373 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.510349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.972256 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:41.009839 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:41.066333 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.066366 1055021 retry.go:31] will retry after 34.953666856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.509748 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.009596 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.509438 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.009956 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.510378 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.009680 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.012784 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.510247 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.010335 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.509529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.009480 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.509657 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.009556 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.509689 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
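The half-second cadence of the "sudo pgrep -xnf kube-apiserver.*minikube.*" probes above is minikube polling for the apiserver process to appear. A minimal stand-alone wait loop with the same shape (an illustrative sketch, not minikube's actual implementation) could look like:

    # Poll every 0.5s until a kube-apiserver process started for the minikube
    # profile shows up, giving up after roughly five minutes.
    for i in $(seq 1 600); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"
        break
      fi
      sleep 0.5
    done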
	I1208 01:57:49.009367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.009459 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.046711 1055021 cri.go:89] found id: ""
	I1208 01:57:49.046741 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.046749 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.046756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.046829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.086414 1055021 cri.go:89] found id: ""
	I1208 01:57:49.086435 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.086443 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.086449 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.086517 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:49.111234 1055021 cri.go:89] found id: ""
	I1208 01:57:49.111256 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.111264 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:49.111270 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:49.111328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:49.135868 1055021 cri.go:89] found id: ""
	I1208 01:57:49.135890 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.135899 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:49.135905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:49.135966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:49.161459 1055021 cri.go:89] found id: ""
	I1208 01:57:49.161482 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.161490 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:49.161496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:49.161557 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:49.186397 1055021 cri.go:89] found id: ""
	I1208 01:57:49.186421 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.186430 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:49.186436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:49.186542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:49.213171 1055021 cri.go:89] found id: ""
	I1208 01:57:49.213192 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.213201 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:49.213207 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:49.213265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:49.239381 1055021 cri.go:89] found id: ""
	I1208 01:57:49.239451 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.239484 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:49.239500 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:49.239512 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:49.311423 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:49.311459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:49.331846 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:49.331876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:49.396868 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:49.396933 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:49.396954 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:49.425376 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:49.425412 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
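With no control-plane containers found, the tooling falls back to collecting host-level diagnostics. The same evidence can be gathered by hand on the node; the command set below is taken directly from the log, only the grouping into one block is added for convenience:

    # Collect the diagnostics minikube gathers when the apiserver is unreachable.
    sudo journalctl -u kubelet -n 400        # kubelet logs
    sudo journalctl -u crio -n 400           # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
    sudo crictl ps -a                        # all containers, including exited ones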
	I1208 01:57:51.956807 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:51.967366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:51.967435 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:51.995332 1055021 cri.go:89] found id: ""
	I1208 01:57:51.995356 1055021 logs.go:282] 0 containers: []
	W1208 01:57:51.995364 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:51.995371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:51.995429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.032087 1055021 cri.go:89] found id: ""
	I1208 01:57:52.032112 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.032121 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.032128 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.032190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.066375 1055021 cri.go:89] found id: ""
	I1208 01:57:52.066403 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.066412 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.066420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.066490 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:52.098263 1055021 cri.go:89] found id: ""
	I1208 01:57:52.098291 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.098300 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:52.098306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:52.098376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:52.125642 1055021 cri.go:89] found id: ""
	I1208 01:57:52.125672 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.125681 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:52.125688 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:52.125750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:52.155324 1055021 cri.go:89] found id: ""
	I1208 01:57:52.155348 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.155356 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:52.155363 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:52.155424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:52.180558 1055021 cri.go:89] found id: ""
	I1208 01:57:52.180625 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.180647 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:52.180659 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:52.180742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:52.209892 1055021 cri.go:89] found id: ""
	I1208 01:57:52.209921 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.209930 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:52.209940 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:52.209951 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:52.237887 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:52.237925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:52.279083 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:52.279113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:52.360508 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:52.360547 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:52.379387 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:52.379417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:52.443498 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.871074 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:53.931966 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:53.931998 1055021 retry.go:31] will retry after 33.054913046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:54.943790 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:54.955406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:54.955477 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:54.980272 1055021 cri.go:89] found id: ""
	I1208 01:57:54.980295 1055021 logs.go:282] 0 containers: []
	W1208 01:57:54.980303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:54.980310 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:54.980377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.016873 1055021 cri.go:89] found id: ""
	I1208 01:57:55.016950 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.016973 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.016992 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.017116 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.055884 1055021 cri.go:89] found id: ""
	I1208 01:57:55.055905 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.055914 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.055920 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.055979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.085540 1055021 cri.go:89] found id: ""
	I1208 01:57:55.085561 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.085569 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.085576 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.085641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.111356 1055021 cri.go:89] found id: ""
	I1208 01:57:55.111378 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.111386 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.111393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.111473 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:55.137620 1055021 cri.go:89] found id: ""
	I1208 01:57:55.137643 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.137651 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:55.137657 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:55.137717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:55.162561 1055021 cri.go:89] found id: ""
	I1208 01:57:55.162626 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.162650 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:55.162667 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:55.162751 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:55.188593 1055021 cri.go:89] found id: ""
	I1208 01:57:55.188658 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.188683 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:55.188697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:55.188744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:55.254035 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:55.254057 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:55.254081 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:55.286453 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:55.286528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:55.320738 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:55.320762 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.387748 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:55.387783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
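
The block above is one complete pass of the harness's container sweep: each control-plane component is looked up by name through CRI-O and comes back empty. A hand-run equivalent, assuming the same crictl binary inside the node, mirrors the per-component checks logged above; an empty result per component corresponds to the `found id: ""` / `0 containers` entries:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
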
	I1208 01:57:57.905905 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:57.918662 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:57.918736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:57.946026 1055021 cri.go:89] found id: ""
	I1208 01:57:57.946049 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.946058 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:57.946065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:57.946124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:57.971642 1055021 cri.go:89] found id: ""
	I1208 01:57:57.971669 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.971678 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:57.971685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:57.971744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.007407 1055021 cri.go:89] found id: ""
	I1208 01:57:58.007432 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.007441 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.007447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.007523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.050421 1055021 cri.go:89] found id: ""
	I1208 01:57:58.050442 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.050450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.050457 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.050518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.083694 1055021 cri.go:89] found id: ""
	I1208 01:57:58.083719 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.083728 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.083741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.083800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.110828 1055021 cri.go:89] found id: ""
	I1208 01:57:58.110874 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.110882 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.110899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.110974 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.136277 1055021 cri.go:89] found id: ""
	I1208 01:57:58.136302 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.136310 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.136317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.136378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:58.162168 1055021 cri.go:89] found id: ""
	I1208 01:57:58.162234 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.162258 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:58.162280 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:58.162304 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.191089 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:58.191121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:58.262015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:58.262058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:58.282086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:58.282121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:58.355880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:58.355910 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:58.355926 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:59.624913 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:59.684883 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:59.684920 1055021 retry.go:31] will retry after 39.668120724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
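
The storage-provisioner apply fails for the same reason and is scheduled for retry with backoff. For reference, the command being retried is an ordinary kubectl apply against the node-local kubeconfig; once the apiserver is reachable it can be replayed by hand (paths and binary version exactly as shown in the Run: line above):

    # Same apply the harness retries; only succeeds once localhost:8443 answers.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml
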
	I1208 01:58:00.884752 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:00.909814 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:00.909896 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:00.936313 1055021 cri.go:89] found id: ""
	I1208 01:58:00.936344 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.936353 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:00.936360 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:00.936420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:00.966288 1055021 cri.go:89] found id: ""
	I1208 01:58:00.966355 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.966376 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:00.966394 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:00.966483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:00.992494 1055021 cri.go:89] found id: ""
	I1208 01:58:00.992526 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.992536 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:00.992543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:00.992608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.026941 1055021 cri.go:89] found id: ""
	I1208 01:58:01.026969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.026979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.026985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.027057 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.058196 1055021 cri.go:89] found id: ""
	I1208 01:58:01.058224 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.058233 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.058239 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.058301 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.086997 1055021 cri.go:89] found id: ""
	I1208 01:58:01.087025 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.087034 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.087042 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.087124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.113372 1055021 cri.go:89] found id: ""
	I1208 01:58:01.113401 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.113411 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.113417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.113480 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.140687 1055021 cri.go:89] found id: ""
	I1208 01:58:01.140717 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.140726 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.140736 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.140747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:01.211011 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:01.211061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:01.229916 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:01.229948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:01.319423 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:01.319443 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:01.319455 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:01.349176 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:01.349213 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:03.883281 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:03.894087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:03.894159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:03.919271 1055021 cri.go:89] found id: ""
	I1208 01:58:03.919294 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.919302 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:03.919309 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:03.919367 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:03.944356 1055021 cri.go:89] found id: ""
	I1208 01:58:03.944379 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.944387 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:03.944393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:03.944456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:03.969863 1055021 cri.go:89] found id: ""
	I1208 01:58:03.969890 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.969900 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:03.969907 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:03.969981 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:03.995306 1055021 cri.go:89] found id: ""
	I1208 01:58:03.995328 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.995336 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:03.995344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:03.995402 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.037050 1055021 cri.go:89] found id: ""
	I1208 01:58:04.037079 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.037089 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.037096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.037159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.081029 1055021 cri.go:89] found id: ""
	I1208 01:58:04.081057 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.081066 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.081073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.081139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.111984 1055021 cri.go:89] found id: ""
	I1208 01:58:04.112005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.112013 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.112020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.112079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.140750 1055021 cri.go:89] found id: ""
	I1208 01:58:04.140776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.140784 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.140793 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.140805 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.207146 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.207183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:04.225030 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:04.225061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:04.295674 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:04.295696 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:04.295708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:04.326962 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.327003 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:06.859119 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:06.871159 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:06.871236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:06.901570 1055021 cri.go:89] found id: ""
	I1208 01:58:06.901594 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.901603 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:06.901618 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:06.901681 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:06.930193 1055021 cri.go:89] found id: ""
	I1208 01:58:06.930220 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.930229 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:06.930235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:06.930298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:06.955159 1055021 cri.go:89] found id: ""
	I1208 01:58:06.955188 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.955197 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:06.955205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:06.955278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:06.980007 1055021 cri.go:89] found id: ""
	I1208 01:58:06.980031 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.980040 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:06.980046 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:06.980103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.017391 1055021 cri.go:89] found id: ""
	I1208 01:58:07.017417 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.017425 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.017432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.017495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.048550 1055021 cri.go:89] found id: ""
	I1208 01:58:07.048577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.048586 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.048596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.048659 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.080691 1055021 cri.go:89] found id: ""
	I1208 01:58:07.080759 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.080783 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.080796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.080874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.105849 1055021 cri.go:89] found id: ""
	I1208 01:58:07.105925 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.105948 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.105971 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:07.106012 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:07.138653 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.138732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.206905 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.206940 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.224653 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.224683 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.303888 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.303912 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:07.303925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:09.834549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:09.845152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:09.845227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:09.870225 1055021 cri.go:89] found id: ""
	I1208 01:58:09.870251 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.870259 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:09.870268 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:09.870330 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:09.896168 1055021 cri.go:89] found id: ""
	I1208 01:58:09.896191 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.896200 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:09.896206 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:09.896269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:09.922117 1055021 cri.go:89] found id: ""
	I1208 01:58:09.922140 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.922149 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:09.922155 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:09.922215 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:09.947105 1055021 cri.go:89] found id: ""
	I1208 01:58:09.947129 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.947137 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:09.947143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:09.947236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:09.972509 1055021 cri.go:89] found id: ""
	I1208 01:58:09.972535 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.972544 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:09.972551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:09.972609 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.009065 1055021 cri.go:89] found id: ""
	I1208 01:58:10.009097 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.009107 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.009115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.009196 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.052170 1055021 cri.go:89] found id: ""
	I1208 01:58:10.052197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.052206 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.052212 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.052278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.078447 1055021 cri.go:89] found id: ""
	I1208 01:58:10.078472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.078480 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.078489 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:10.078500 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:10.109259 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.109300 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:10.138226 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.138251 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.204388 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.204424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.222357 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.222398 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.305027 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
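
Each failed pass ends with the same log-gathering step. The diagnostics it collects can also be pulled manually with the same commands (all taken from the Run: lines above), which is a reasonable starting point for root-causing why the apiserver container never appears:

    sudo journalctl -u kubelet -n 400        # kubelet: why static pods are not starting
    sudo journalctl -u crio -n 400           # CRI-O runtime logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                        # container status (empty in this run)
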
	I1208 01:58:12.805305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:12.815949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:12.816024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:12.840507 1055021 cri.go:89] found id: ""
	I1208 01:58:12.840531 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.840540 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:12.840546 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:12.840614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:12.865555 1055021 cri.go:89] found id: ""
	I1208 01:58:12.865580 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.865589 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:12.865595 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:12.865653 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:12.890286 1055021 cri.go:89] found id: ""
	I1208 01:58:12.890311 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.890319 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:12.890325 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:12.890383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:12.915193 1055021 cri.go:89] found id: ""
	I1208 01:58:12.915217 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.915226 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:12.915233 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:12.915291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:12.940889 1055021 cri.go:89] found id: ""
	I1208 01:58:12.940915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.940923 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:12.940931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:12.941011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:12.967233 1055021 cri.go:89] found id: ""
	I1208 01:58:12.967259 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.967268 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:12.967275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:12.967337 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:12.990975 1055021 cri.go:89] found id: ""
	I1208 01:58:12.991001 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.991009 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:12.991016 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:12.991088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.025590 1055021 cri.go:89] found id: ""
	I1208 01:58:13.025616 1055021 logs.go:282] 0 containers: []
	W1208 01:58:13.025625 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.025634 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.025646 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.063362 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.063391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.134922 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.134959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.153025 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.153060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.215226 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:13.215246 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:13.215258 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:15.744740 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:15.755312 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:15.755383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:15.780891 1055021 cri.go:89] found id: ""
	I1208 01:58:15.780915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.780923 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:15.780930 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:15.780989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:15.806161 1055021 cri.go:89] found id: ""
	I1208 01:58:15.806185 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.806194 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:15.806200 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:15.806257 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:15.831178 1055021 cri.go:89] found id: ""
	I1208 01:58:15.831197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.831205 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:15.831211 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:15.831269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:15.856130 1055021 cri.go:89] found id: ""
	I1208 01:58:15.856155 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.856164 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:15.856171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:15.856232 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:15.885064 1055021 cri.go:89] found id: ""
	I1208 01:58:15.885136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.885159 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:15.885177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:15.885270 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:15.912595 1055021 cri.go:89] found id: ""
	I1208 01:58:15.912623 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.912631 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:15.912638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:15.912700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:15.936650 1055021 cri.go:89] found id: ""
	I1208 01:58:15.936677 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.936686 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:15.936692 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:15.936752 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:15.962329 1055021 cri.go:89] found id: ""
	I1208 01:58:15.962350 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.962358 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:15.962367 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:15.962378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 01:58:16.020986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:16.067660 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:16.067744 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:16.067772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1208 01:58:16.112099 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.112132 1055021 retry.go:31] will retry after 29.72360839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.126560 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.126615 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.157854 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.157883 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.223999 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.224035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:18.742355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:18.752998 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:18.753077 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:18.778077 1055021 cri.go:89] found id: ""
	I1208 01:58:18.778099 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.778107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:18.778114 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:18.778171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:18.802643 1055021 cri.go:89] found id: ""
	I1208 01:58:18.802665 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.802673 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:18.802679 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:18.802736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:18.827413 1055021 cri.go:89] found id: ""
	I1208 01:58:18.827441 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.827450 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:18.827456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:18.827514 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:18.852593 1055021 cri.go:89] found id: ""
	I1208 01:58:18.852618 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.852627 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:18.852634 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:18.852694 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:18.877850 1055021 cri.go:89] found id: ""
	I1208 01:58:18.877876 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.877884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:18.877891 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:18.877949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:18.906907 1055021 cri.go:89] found id: ""
	I1208 01:58:18.906930 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.906938 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:18.906945 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:18.907007 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:18.932699 1055021 cri.go:89] found id: ""
	I1208 01:58:18.932723 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.932733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:18.932739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:18.932802 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:18.958426 1055021 cri.go:89] found id: ""
	I1208 01:58:18.958448 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.958456 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:18.958465 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:18.958476 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.023824 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.023904 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.043811 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.043946 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.116236 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:19.116259 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:19.116273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:19.145950 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.145986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:21.678015 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:21.689017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:21.689107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:21.714453 1055021 cri.go:89] found id: ""
	I1208 01:58:21.714513 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.714522 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:21.714529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:21.714590 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:21.738662 1055021 cri.go:89] found id: ""
	I1208 01:58:21.738688 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.738697 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:21.738703 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:21.738765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:21.763648 1055021 cri.go:89] found id: ""
	I1208 01:58:21.763684 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.763693 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:21.763700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:21.763768 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:21.789120 1055021 cri.go:89] found id: ""
	I1208 01:58:21.789142 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.789150 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:21.789156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:21.789212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:21.814445 1055021 cri.go:89] found id: ""
	I1208 01:58:21.814466 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.814474 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:21.814480 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:21.814538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:21.843027 1055021 cri.go:89] found id: ""
	I1208 01:58:21.843061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.843070 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:21.843078 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:21.843139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:21.872604 1055021 cri.go:89] found id: ""
	I1208 01:58:21.872632 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.872640 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:21.872647 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:21.872725 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:21.898190 1055021 cri.go:89] found id: ""
	I1208 01:58:21.898225 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.898233 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:21.898258 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:21.898274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:21.963735 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:21.963774 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:21.981549 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:21.981580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.065337 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:22.065359 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:22.065373 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:22.096383 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.096419 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:24.626630 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:24.637406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:24.637484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:24.662982 1055021 cri.go:89] found id: ""
	I1208 01:58:24.663005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.663014 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:24.663020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:24.663088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:24.687863 1055021 cri.go:89] found id: ""
	I1208 01:58:24.687887 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.687897 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:24.687904 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:24.687965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:24.713087 1055021 cri.go:89] found id: ""
	I1208 01:58:24.713110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.713119 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:24.713125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:24.713185 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:24.738346 1055021 cri.go:89] found id: ""
	I1208 01:58:24.738369 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.738378 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:24.738385 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:24.738451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:24.764281 1055021 cri.go:89] found id: ""
	I1208 01:58:24.764309 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.764317 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:24.764323 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:24.764382 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:24.788244 1055021 cri.go:89] found id: ""
	I1208 01:58:24.788267 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.788276 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:24.788282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:24.788358 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:24.812521 1055021 cri.go:89] found id: ""
	I1208 01:58:24.812544 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.812553 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:24.812559 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:24.812620 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:24.837747 1055021 cri.go:89] found id: ""
	I1208 01:58:24.837772 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.837781 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:24.837790 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:24.837804 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:24.903152 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:24.903189 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:24.920792 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:24.920824 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:24.987709 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:24.987780 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:24.987806 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:25.019693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.019773 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:26.987306 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:58:27.057603 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:27.057721 1055021 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:27.560847 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:27.570936 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:27.571004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:27.595473 1055021 cri.go:89] found id: ""
	I1208 01:58:27.595497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.595505 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:27.595512 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:27.595577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:27.620674 1055021 cri.go:89] found id: ""
	I1208 01:58:27.620696 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.620704 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:27.620710 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:27.620766 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:27.646168 1055021 cri.go:89] found id: ""
	I1208 01:58:27.646192 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.646202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:27.646208 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:27.646283 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:27.671472 1055021 cri.go:89] found id: ""
	I1208 01:58:27.671549 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.671564 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:27.671572 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:27.671632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:27.699385 1055021 cri.go:89] found id: ""
	I1208 01:58:27.699409 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.699417 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:27.699423 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:27.699492 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:27.726912 1055021 cri.go:89] found id: ""
	I1208 01:58:27.726937 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.726946 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:27.726953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:27.727011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:27.752037 1055021 cri.go:89] found id: ""
	I1208 01:58:27.752061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.752070 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:27.752076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:27.752139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:27.777018 1055021 cri.go:89] found id: ""
	I1208 01:58:27.777081 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.777097 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:27.777106 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:27.777119 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:27.845091 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:27.845115 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:27.845129 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:27.873750 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:27.873794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:27.906540 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:27.906569 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:27.986314 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:27.986360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.504860 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:30.520332 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:30.520426 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:30.558545 1055021 cri.go:89] found id: ""
	I1208 01:58:30.558574 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.558589 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:30.558596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:30.558670 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:30.587958 1055021 cri.go:89] found id: ""
	I1208 01:58:30.587979 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.587988 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:30.587994 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:30.588055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:30.613947 1055021 cri.go:89] found id: ""
	I1208 01:58:30.613969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.613977 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:30.613983 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:30.614048 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:30.639872 1055021 cri.go:89] found id: ""
	I1208 01:58:30.639899 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.639908 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:30.639916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:30.639975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:30.664766 1055021 cri.go:89] found id: ""
	I1208 01:58:30.664789 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.664797 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:30.664804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:30.664862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:30.694045 1055021 cri.go:89] found id: ""
	I1208 01:58:30.694110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.694130 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:30.694149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:30.694238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:30.719821 1055021 cri.go:89] found id: ""
	I1208 01:58:30.719843 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.719851 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:30.719857 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:30.719915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:30.745151 1055021 cri.go:89] found id: ""
	I1208 01:58:30.745176 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.745185 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:30.745194 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:30.745206 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:30.808884 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:30.808918 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.826624 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:30.826650 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:30.895279 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:30.895304 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:30.895317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:30.927429 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:30.927478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:33.458304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:33.468970 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:33.469040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:33.493566 1055021 cri.go:89] found id: ""
	I1208 01:58:33.493592 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.493601 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:33.493608 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:33.493669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:33.526608 1055021 cri.go:89] found id: ""
	I1208 01:58:33.526630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.526638 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:33.526644 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:33.526705 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:33.560265 1055021 cri.go:89] found id: ""
	I1208 01:58:33.560287 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.560295 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:33.560301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:33.560376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:33.588803 1055021 cri.go:89] found id: ""
	I1208 01:58:33.588830 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.588839 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:33.588846 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:33.588908 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:33.614585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.614610 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.614619 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:33.614625 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:33.614684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:33.638894 1055021 cri.go:89] found id: ""
	I1208 01:58:33.638917 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.638926 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:33.638933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:33.638991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:33.664714 1055021 cri.go:89] found id: ""
	I1208 01:58:33.664736 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.664744 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:33.664752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:33.664814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:33.689585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.689611 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.689620 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:33.689629 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:33.689641 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:33.753906 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:33.753942 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:33.771754 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:33.771783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:33.841023 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:33.841047 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:33.841060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:33.868853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:33.868891 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.397728 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.410372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.410443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:36.441015 1055021 cri.go:89] found id: ""
	I1208 01:58:36.441041 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.441049 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:36.441055 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:36.441117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:36.466353 1055021 cri.go:89] found id: ""
	I1208 01:58:36.466386 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.466395 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:36.466401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:36.466463 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:36.491643 1055021 cri.go:89] found id: ""
	I1208 01:58:36.491670 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.491679 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:36.491685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:36.491743 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:36.531444 1055021 cri.go:89] found id: ""
	I1208 01:58:36.531472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.531480 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:36.531487 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:36.531551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:36.561863 1055021 cri.go:89] found id: ""
	I1208 01:58:36.561891 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.561900 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:36.561906 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:36.561965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:36.598817 1055021 cri.go:89] found id: ""
	I1208 01:58:36.598868 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.598877 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:36.598884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:36.598953 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:36.625352 1055021 cri.go:89] found id: ""
	I1208 01:58:36.625392 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.625402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:36.625408 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:36.625478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:36.649929 1055021 cri.go:89] found id: ""
	I1208 01:58:36.649961 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.649969 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:36.649979 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:36.649991 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:36.717242 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:36.717272 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:36.717284 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:36.745340 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:36.745375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.772396 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:36.772423 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:36.840336 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:36.840375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.353819 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:58:39.359310 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:58:39.415165 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:39.415265 1055021 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:39.415318 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.415380 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.440780 1055021 cri.go:89] found id: ""
	I1208 01:58:39.440802 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.440817 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.440824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.440883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:39.469267 1055021 cri.go:89] found id: ""
	I1208 01:58:39.469293 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.469302 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:39.469308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:39.469369 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:39.497131 1055021 cri.go:89] found id: ""
	I1208 01:58:39.497154 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.497162 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:39.497171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:39.497229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:39.533641 1055021 cri.go:89] found id: ""
	I1208 01:58:39.533666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.533675 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:39.533683 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:39.533741 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:39.569861 1055021 cri.go:89] found id: ""
	I1208 01:58:39.569884 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.569893 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:39.569900 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:39.569959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:39.598670 1055021 cri.go:89] found id: ""
	I1208 01:58:39.598694 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.598702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:39.598709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:39.598770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:39.623360 1055021 cri.go:89] found id: ""
	I1208 01:58:39.623384 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.623392 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:39.623398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:39.623464 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:39.647840 1055021 cri.go:89] found id: ""
	I1208 01:58:39.647864 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.647873 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:39.647881 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:39.647893 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:39.711466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:39.711505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.728921 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:39.728950 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:39.792077 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:39.792097 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:39.792111 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:39.819026 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:39.819064 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.348228 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.359751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.359835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.385781 1055021 cri.go:89] found id: ""
	I1208 01:58:42.385808 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.385818 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.385824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.385884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:42.412513 1055021 cri.go:89] found id: ""
	I1208 01:58:42.412540 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.412555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:42.412562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:42.412621 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:42.439136 1055021 cri.go:89] found id: ""
	I1208 01:58:42.439202 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.439217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:42.439223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:42.439297 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:42.468994 1055021 cri.go:89] found id: ""
	I1208 01:58:42.469069 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.469092 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:42.469105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:42.469190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:42.493446 1055021 cri.go:89] found id: ""
	I1208 01:58:42.493481 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.493489 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:42.493496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:42.493573 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:42.535705 1055021 cri.go:89] found id: ""
	I1208 01:58:42.535751 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.535760 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:42.535768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:42.535838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:42.565148 1055021 cri.go:89] found id: ""
	I1208 01:58:42.565174 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.565183 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:42.565189 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:42.565262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:42.592944 1055021 cri.go:89] found id: ""
	I1208 01:58:42.592967 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.592975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:42.592984 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:42.592995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.627360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:42.627389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:42.692577 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:42.692611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:42.710349 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:42.710378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:42.782051 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:42.782073 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:42.782085 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.310746 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.328999 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.329226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.355526 1055021 cri.go:89] found id: ""
	I1208 01:58:45.355554 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.355562 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.355569 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.355649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.385050 1055021 cri.go:89] found id: ""
	I1208 01:58:45.385073 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.385081 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.385087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.385146 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.409413 1055021 cri.go:89] found id: ""
	I1208 01:58:45.409438 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.409447 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.409452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.409510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:45.445870 1055021 cri.go:89] found id: ""
	I1208 01:58:45.445903 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.445912 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:45.445919 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:45.445988 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:45.473347 1055021 cri.go:89] found id: ""
	I1208 01:58:45.473382 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.473391 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:45.473397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:45.473465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:45.497721 1055021 cri.go:89] found id: ""
	I1208 01:58:45.497756 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.497765 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:45.497772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:45.497839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:45.529708 1055021 cri.go:89] found id: ""
	I1208 01:58:45.529739 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.529748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:45.529754 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:45.529829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:45.556748 1055021 cri.go:89] found id: ""
	I1208 01:58:45.556783 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.556792 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:45.556801 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:45.556812 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:45.623617 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:45.623652 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:45.642117 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:45.642151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:45.711093 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:45.711114 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:45.711127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.739133 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:45.739169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.836195 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:45.896793 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:45.896954 1055021 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:45.900444 1055021 out.go:179] * Enabled addons: 
	I1208 01:58:45.903391 1055021 addons.go:530] duration metric: took 1m57.256950319s for enable addons: enabled=[]
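	[note] The storage-provisioner and dashboard apply failures above share one root cause: no kube-apiserver container ever comes up, so every kubectl call against https://localhost:8443 is refused and the addon callbacks exhaust their retries with enabled=[]. A minimal sketch of confirming this and retrying one addon manifest by hand, assuming shell access to the node (e.g. via minikube ssh) and using only the commands and paths already shown in this log; --validate=false is the workaround kubectl's own stderr suggests, and it only helps once the apiserver is actually reachable:
	
	    # same probes minikube runs above: is there an apiserver process or container at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # once the apiserver answers on :8443, re-apply the addon manifest;
	    # skipping schema validation avoids the failed OpenAPI download noted in the error
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml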
	I1208 01:58:48.271013 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.282344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.282467 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.314973 1055021 cri.go:89] found id: ""
	I1208 01:58:48.315046 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.315078 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.315098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.315204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.344987 1055021 cri.go:89] found id: ""
	I1208 01:58:48.345017 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.345026 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.345033 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.345094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.370650 1055021 cri.go:89] found id: ""
	I1208 01:58:48.370674 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.370681 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.370687 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.370749 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.396253 1055021 cri.go:89] found id: ""
	I1208 01:58:48.396319 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.396334 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.396341 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.396410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.425208 1055021 cri.go:89] found id: ""
	I1208 01:58:48.425235 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.425244 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.425250 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.425312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:48.455125 1055021 cri.go:89] found id: ""
	I1208 01:58:48.455150 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.455160 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:48.455177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:48.455238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:48.479964 1055021 cri.go:89] found id: ""
	I1208 01:58:48.480043 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.480059 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:48.480067 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:48.480128 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:48.506875 1055021 cri.go:89] found id: ""
	I1208 01:58:48.506902 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.506911 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:48.506920 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:48.506933 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:48.581685 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:48.581724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:48.600281 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:48.600313 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:48.663184 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:48.663203 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:48.663217 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:48.691509 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:48.691549 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.221462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.231909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.231985 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.262905 1055021 cri.go:89] found id: ""
	I1208 01:58:51.262932 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.262940 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.262946 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.263006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.293540 1055021 cri.go:89] found id: ""
	I1208 01:58:51.293567 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.293576 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.293582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.293639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.324201 1055021 cri.go:89] found id: ""
	I1208 01:58:51.324228 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.324236 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.324242 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.324298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.351933 1055021 cri.go:89] found id: ""
	I1208 01:58:51.351960 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.351974 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.351981 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.352040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.376814 1055021 cri.go:89] found id: ""
	I1208 01:58:51.376836 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.376845 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.376851 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.376909 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.401752 1055021 cri.go:89] found id: ""
	I1208 01:58:51.401776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.401785 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.401791 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.401848 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.430825 1055021 cri.go:89] found id: ""
	I1208 01:58:51.430861 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.430870 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.430876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.430938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.455641 1055021 cri.go:89] found id: ""
	I1208 01:58:51.455666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.455674 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.455684 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.455695 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:51.527696 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:51.527719 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:51.527732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:51.557037 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:51.557072 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.589759 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:51.589789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:51.655851 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.655888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:54.174903 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.185290 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.185363 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.213134 1055021 cri.go:89] found id: ""
	I1208 01:58:54.213158 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.213167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.213174 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.213234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.238420 1055021 cri.go:89] found id: ""
	I1208 01:58:54.238446 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.238455 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.238461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.238524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.272304 1055021 cri.go:89] found id: ""
	I1208 01:58:54.272331 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.272339 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.272345 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.272405 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.302582 1055021 cri.go:89] found id: ""
	I1208 01:58:54.302608 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.302617 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.302623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.302683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.331550 1055021 cri.go:89] found id: ""
	I1208 01:58:54.331577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.331585 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.331591 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.331656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.356262 1055021 cri.go:89] found id: ""
	I1208 01:58:54.356285 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.356293 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.356300 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.356364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.382019 1055021 cri.go:89] found id: ""
	I1208 01:58:54.382045 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.382054 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.382060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.382120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.407111 1055021 cri.go:89] found id: ""
	I1208 01:58:54.407136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.407145 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.407154 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.407169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.470487 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
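	Each of the repeated "failed describe nodes" entries in this log reduces to the same root cause: nothing is listening on the apiserver port 8443 inside the node, so every client call is refused. A reader who wants to reproduce the probe can re-run the exact command minikube logs; the binary and kubeconfig paths below are copied verbatim from the lines above, while invoking it through `minikube ssh` (with `-p` for a named profile) is an assumption about how one would reach the node.

	# Hypothetical reproduction of the failing probe; paths are taken from the log above.
	minikube ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# Expected while the control plane is down:
	#   dial tcp [::1]:8443: connect: connection refused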
	I1208 01:58:54.470509 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:54.470522 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:54.498660 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:54.498697 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:54.539432 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:54.539462 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.617690 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:54.617725 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
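	The block above is one full iteration of minikube's wait loop: it looks for a kube-apiserver process, probes CRI-O for each expected control-plane container, finds none, and falls back to collecting kubelet, dmesg, CRI-O and container-status output (plus the `kubectl describe nodes` probe sketched just above) before retrying a few seconds later. Below is a minimal shell sketch of that iteration, assembled from the commands logged above; every probe appears verbatim in the log, but the loop wrapper and the 3-second sleep are assumptions inferred from the timestamps.

	# Sketch of one diagnostic iteration (assumed loop structure; commands from the log).
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$name"   # empty output => component not running
	  done
	  sudo journalctl -u kubelet -n 400            # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u crio -n 400               # CRI-O logs
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status
	  sleep 3                                      # assumed interval, matching the ~3 s gaps above
	done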
	I1208 01:58:57.135616 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.145801 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.145871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.170603 1055021 cri.go:89] found id: ""
	I1208 01:58:57.170629 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.170637 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.170643 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.170701 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.197272 1055021 cri.go:89] found id: ""
	I1208 01:58:57.197300 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.197309 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.197315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.197379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.226393 1055021 cri.go:89] found id: ""
	I1208 01:58:57.226420 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.226430 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.226436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.226499 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.267139 1055021 cri.go:89] found id: ""
	I1208 01:58:57.267215 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.267239 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.267257 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.267350 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.302475 1055021 cri.go:89] found id: ""
	I1208 01:58:57.302497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.302505 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.302511 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.302571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.335859 1055021 cri.go:89] found id: ""
	I1208 01:58:57.335886 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.335894 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.335901 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.335959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.360608 1055021 cri.go:89] found id: ""
	I1208 01:58:57.360630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.360639 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.360646 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.360706 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.386045 1055021 cri.go:89] found id: ""
	I1208 01:58:57.386067 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.386076 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.386084 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.386096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:57.454478 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.454515 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.472469 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.472503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.545965 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.545998 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:57.546011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:57.584922 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.584959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:00.114637 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.175958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.176042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.249754 1055021 cri.go:89] found id: ""
	I1208 01:59:00.249778 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.249788 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.249795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.249868 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.304452 1055021 cri.go:89] found id: ""
	I1208 01:59:00.304487 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.304497 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.304503 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.304576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.346364 1055021 cri.go:89] found id: ""
	I1208 01:59:00.346424 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.346434 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.346465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.346577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.377822 1055021 cri.go:89] found id: ""
	I1208 01:59:00.377852 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.377862 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.377868 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.377963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.406823 1055021 cri.go:89] found id: ""
	I1208 01:59:00.406875 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.406884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.406908 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.406992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.435875 1055021 cri.go:89] found id: ""
	I1208 01:59:00.435911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.435920 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.435942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.436025 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.463084 1055021 cri.go:89] found id: ""
	I1208 01:59:00.463117 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.463126 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.463135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.463243 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.489555 1055021 cri.go:89] found id: ""
	I1208 01:59:00.489589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.489598 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.489626 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.489645 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.562522 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.562560 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.582358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.582389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.649877 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.649899 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:00.649912 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:00.682085 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.682120 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.216065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.226430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.226503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.253068 1055021 cri.go:89] found id: ""
	I1208 01:59:03.253093 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.253102 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.253109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.253168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.282867 1055021 cri.go:89] found id: ""
	I1208 01:59:03.282894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.282903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.282910 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.282969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.320054 1055021 cri.go:89] found id: ""
	I1208 01:59:03.320080 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.320092 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.320098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.320155 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.347220 1055021 cri.go:89] found id: ""
	I1208 01:59:03.347243 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.347252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.347258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.347319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.373498 1055021 cri.go:89] found id: ""
	I1208 01:59:03.373570 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.373595 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.373613 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.373703 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.399912 1055021 cri.go:89] found id: ""
	I1208 01:59:03.399948 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.399957 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.399964 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.400023 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.425601 1055021 cri.go:89] found id: ""
	I1208 01:59:03.425625 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.425634 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.425640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.425698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.454732 1055021 cri.go:89] found id: ""
	I1208 01:59:03.454758 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.454767 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.454775 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.454789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.530461 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.530493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.549828 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.549917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.620701 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.620720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:03.620735 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:03.649018 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.649058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.177524 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.187461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.187531 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.214977 1055021 cri.go:89] found id: ""
	I1208 01:59:06.214999 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.215008 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.215015 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.215094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.238383 1055021 cri.go:89] found id: ""
	I1208 01:59:06.238493 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.238514 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.238534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.238619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.272265 1055021 cri.go:89] found id: ""
	I1208 01:59:06.272329 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.272351 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.272367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.272453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.302615 1055021 cri.go:89] found id: ""
	I1208 01:59:06.302658 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.302672 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.302678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.302750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.331427 1055021 cri.go:89] found id: ""
	I1208 01:59:06.331491 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.331512 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.331534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.331619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.356630 1055021 cri.go:89] found id: ""
	I1208 01:59:06.356711 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.356726 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.356734 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.356792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.382232 1055021 cri.go:89] found id: ""
	I1208 01:59:06.382265 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.382273 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.382279 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.382345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.409564 1055021 cri.go:89] found id: ""
	I1208 01:59:06.409598 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.409607 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.409616 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.409629 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.474483 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.474521 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:06.492236 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.492265 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.581040 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:06.581061 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:06.581074 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:06.609481 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.609528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:09.142358 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.152558 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.152645 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.176404 1055021 cri.go:89] found id: ""
	I1208 01:59:09.176469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.176483 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.176494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.176555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.200664 1055021 cri.go:89] found id: ""
	I1208 01:59:09.200687 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.200696 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.200702 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.200759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.227242 1055021 cri.go:89] found id: ""
	I1208 01:59:09.227266 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.227274 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.227280 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.227339 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.251746 1055021 cri.go:89] found id: ""
	I1208 01:59:09.251777 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.251786 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.251792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.251859 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.285331 1055021 cri.go:89] found id: ""
	I1208 01:59:09.285356 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.285365 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.285371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.285438 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.323377 1055021 cri.go:89] found id: ""
	I1208 01:59:09.323403 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.323411 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.323418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.323479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.348974 1055021 cri.go:89] found id: ""
	I1208 01:59:09.349042 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.349058 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.349065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.349127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.378922 1055021 cri.go:89] found id: ""
	I1208 01:59:09.378954 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.378962 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.378972 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.378983 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.444646 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.444685 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.462014 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.462050 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.537469 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.537502 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:09.537514 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:09.568427 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.568465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.103793 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.114409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.114485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.143200 1055021 cri.go:89] found id: ""
	I1208 01:59:12.143235 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.143245 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.143251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.143323 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.171946 1055021 cri.go:89] found id: ""
	I1208 01:59:12.171971 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.171979 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.171985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.172050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.196625 1055021 cri.go:89] found id: ""
	I1208 01:59:12.196651 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.196661 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.196669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.196775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.223108 1055021 cri.go:89] found id: ""
	I1208 01:59:12.223178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.223203 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.223221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.223315 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.253115 1055021 cri.go:89] found id: ""
	I1208 01:59:12.253141 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.253155 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.253173 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.253271 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.293405 1055021 cri.go:89] found id: ""
	I1208 01:59:12.293429 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.293438 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.293444 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.293512 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.323970 1055021 cri.go:89] found id: ""
	I1208 01:59:12.324002 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.324011 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.324017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.324087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.355979 1055021 cri.go:89] found id: ""
	I1208 01:59:12.356005 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.356013 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.356023 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.356035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.421458 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.421496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.440234 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.440269 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.509186 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.509214 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:12.509226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:12.541753 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.541790 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.078928 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.091792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.091882 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.118461 1055021 cri.go:89] found id: ""
	I1208 01:59:15.118482 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.118490 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.118496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.118561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.143588 1055021 cri.go:89] found id: ""
	I1208 01:59:15.143612 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.143621 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.143627 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.143687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.174121 1055021 cri.go:89] found id: ""
	I1208 01:59:15.174149 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.174158 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.174164 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.174281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.202466 1055021 cri.go:89] found id: ""
	I1208 01:59:15.202489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.202498 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.202504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.202563 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.229640 1055021 cri.go:89] found id: ""
	I1208 01:59:15.229663 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.229672 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.229678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.229737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.259982 1055021 cri.go:89] found id: ""
	I1208 01:59:15.260013 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.260021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.260027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.260085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.299510 1055021 cri.go:89] found id: ""
	I1208 01:59:15.299535 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.299544 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.299551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.299639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.327621 1055021 cri.go:89] found id: ""
	I1208 01:59:15.327655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.327664 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.327673 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.327684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:15.394588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.394632 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.412251 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.412283 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.478739 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.478760 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:15.478772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:15.507201 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.507279 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.049265 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.060577 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.060652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.087023 1055021 cri.go:89] found id: ""
	I1208 01:59:18.087050 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.087066 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.087073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.087132 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.115800 1055021 cri.go:89] found id: ""
	I1208 01:59:18.115826 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.115835 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.115841 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.115901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.145764 1055021 cri.go:89] found id: ""
	I1208 01:59:18.145787 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.145797 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.145803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.145862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.174947 1055021 cri.go:89] found id: ""
	I1208 01:59:18.174974 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.174983 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.174990 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.175050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.200824 1055021 cri.go:89] found id: ""
	I1208 01:59:18.200847 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.200857 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.200863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.200935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.229145 1055021 cri.go:89] found id: ""
	I1208 01:59:18.229168 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.229176 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.229185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.229246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.266059 1055021 cri.go:89] found id: ""
	I1208 01:59:18.266083 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.266092 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.266098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.266159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.293538 1055021 cri.go:89] found id: ""
	I1208 01:59:18.293605 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.293630 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.293657 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.293682 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.366543 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.366580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.387334 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.387367 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.457441 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:18.457480 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:18.457496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:18.486126 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.486159 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:21.020889 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.031877 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.031948 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.061454 1055021 cri.go:89] found id: ""
	I1208 01:59:21.061480 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.061489 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.061496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.061561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.086273 1055021 cri.go:89] found id: ""
	I1208 01:59:21.086300 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.086308 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.086315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.086373 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.112614 1055021 cri.go:89] found id: ""
	I1208 01:59:21.112637 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.112646 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.112652 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.112710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.142489 1055021 cri.go:89] found id: ""
	I1208 01:59:21.142511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.142521 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.142527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.142584 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.167579 1055021 cri.go:89] found id: ""
	I1208 01:59:21.167602 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.167618 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.167624 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.167683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.192114 1055021 cri.go:89] found id: ""
	I1208 01:59:21.192178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.192194 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.192202 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.192266 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.216638 1055021 cri.go:89] found id: ""
	I1208 01:59:21.216660 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.216669 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.216681 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.216739 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.241924 1055021 cri.go:89] found id: ""
	I1208 01:59:21.241956 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.241965 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.241989 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.242005 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.320443 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.320516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.339967 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.340098 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.405503 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.405526 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:21.405540 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:21.433479 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.433513 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:23.960720 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:23.971271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:23.971346 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:23.996003 1055021 cri.go:89] found id: ""
	I1208 01:59:23.996028 1055021 logs.go:282] 0 containers: []
	W1208 01:59:23.996037 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:23.996044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:23.996111 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.024119 1055021 cri.go:89] found id: ""
	I1208 01:59:24.024146 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.024154 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.024160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.024239 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.051095 1055021 cri.go:89] found id: ""
	I1208 01:59:24.051179 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.051202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.051217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.051298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.076451 1055021 cri.go:89] found id: ""
	I1208 01:59:24.076477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.076486 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.076493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.076577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.105499 1055021 cri.go:89] found id: ""
	I1208 01:59:24.105527 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.105537 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.105543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.105656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.136713 1055021 cri.go:89] found id: ""
	I1208 01:59:24.136736 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.136744 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.136751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.136836 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.165410 1055021 cri.go:89] found id: ""
	I1208 01:59:24.165442 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.165453 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.165460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.165541 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.194981 1055021 cri.go:89] found id: ""
	I1208 01:59:24.195018 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.195028 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.195037 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.195049 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.260506 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.260541 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.281317 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.281351 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.350532 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.350562 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:24.350574 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:24.378730 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.378760 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.906964 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.918049 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.918151 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:26.944808 1055021 cri.go:89] found id: ""
	I1208 01:59:26.944832 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.944840 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:26.944863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:26.944936 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:26.969519 1055021 cri.go:89] found id: ""
	I1208 01:59:26.969552 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.969561 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:26.969583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:26.969664 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:26.997687 1055021 cri.go:89] found id: ""
	I1208 01:59:26.997721 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.997730 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:26.997736 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:26.997835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.029005 1055021 cri.go:89] found id: ""
	I1208 01:59:27.029029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.029037 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.029044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.029121 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.052964 1055021 cri.go:89] found id: ""
	I1208 01:59:27.052989 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.053006 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.053027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.053114 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.081309 1055021 cri.go:89] found id: ""
	I1208 01:59:27.081342 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.081352 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.081375 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.081454 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.105197 1055021 cri.go:89] found id: ""
	I1208 01:59:27.105230 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.105239 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.105245 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.105311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.129963 1055021 cri.go:89] found id: ""
	I1208 01:59:27.129994 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.130003 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.130012 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:27.130023 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:27.157821 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.157853 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:27.187177 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.187201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.257425 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.257459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.284073 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.284112 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.365290 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
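	(Annotation, not part of the captured output.) The blocks above and below are single iterations of minikube's apiserver wait loop, repeated every few seconds: each pass probes for a kube-apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, and, finding none, falls back to collecting kubelet, dmesg, CRI-O, and container-status logs; the describe-nodes step then fails because nothing is listening on localhost:8443. A minimal sketch of that per-component check, assuming shell access to the node (the same commands the Run: lines show), might look like:

	# Sketch only: repeat the container lookup the log performs for each component.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  fi
	done
	# With no kube-apiserver container running, the fallback describe-nodes call is
	# expected to fail with "connection refused" on localhost:8443, as logged above:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig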
	I1208 01:59:29.866080 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.876623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.876700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.905223 1055021 cri.go:89] found id: ""
	I1208 01:59:29.905247 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.905257 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.905264 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.905328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:29.935886 1055021 cri.go:89] found id: ""
	I1208 01:59:29.935911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.935920 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:29.935928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:29.935989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:29.961459 1055021 cri.go:89] found id: ""
	I1208 01:59:29.961489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.961499 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:29.961521 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:29.961588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:29.989601 1055021 cri.go:89] found id: ""
	I1208 01:59:29.989666 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.989691 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:29.989709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:29.989794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.034678 1055021 cri.go:89] found id: ""
	I1208 01:59:30.034757 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.034783 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.034802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.034922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.068355 1055021 cri.go:89] found id: ""
	I1208 01:59:30.068380 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.068388 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.068395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.068456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.095676 1055021 cri.go:89] found id: ""
	I1208 01:59:30.095706 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.095717 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.095723 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.095801 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.122432 1055021 cri.go:89] found id: ""
	I1208 01:59:30.122469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.122479 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.122504 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.122543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.191149 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:30.191170 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:30.191183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:30.220413 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.220447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.258205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.258234 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.330424 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.330461 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:32.850065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.861143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.861227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.885421 1055021 cri.go:89] found id: ""
	I1208 01:59:32.885447 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.885457 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.885463 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.885524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.911689 1055021 cri.go:89] found id: ""
	I1208 01:59:32.911716 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.911726 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.911732 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.911794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:32.941141 1055021 cri.go:89] found id: ""
	I1208 01:59:32.941166 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.941175 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:32.941182 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:32.941244 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:32.970750 1055021 cri.go:89] found id: ""
	I1208 01:59:32.970771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.970779 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:32.970786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:32.970883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:32.996768 1055021 cri.go:89] found id: ""
	I1208 01:59:32.996797 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.996806 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:32.996812 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:32.996887 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.025374 1055021 cri.go:89] found id: ""
	I1208 01:59:33.025410 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.025419 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.025448 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.025547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.051845 1055021 cri.go:89] found id: ""
	I1208 01:59:33.051878 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.051888 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.051895 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.051969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.078543 1055021 cri.go:89] found id: ""
	I1208 01:59:33.078566 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.078575 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.078584 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.078597 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.096489 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.096518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.168941 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.168962 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:33.168977 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:33.197574 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.197616 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:33.226563 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.226590 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:35.798966 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.810253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:35.810325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:35.835492 1055021 cri.go:89] found id: ""
	I1208 01:59:35.835516 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.835525 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:35.835534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:35.835593 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:35.861797 1055021 cri.go:89] found id: ""
	I1208 01:59:35.861823 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.861833 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:35.861839 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:35.861901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:35.887036 1055021 cri.go:89] found id: ""
	I1208 01:59:35.887073 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.887083 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:35.887090 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:35.887159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:35.915379 1055021 cri.go:89] found id: ""
	I1208 01:59:35.915456 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.915478 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:35.915493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:35.915566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:35.940687 1055021 cri.go:89] found id: ""
	I1208 01:59:35.940714 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.940724 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:35.940730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:35.940839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:35.967960 1055021 cri.go:89] found id: ""
	I1208 01:59:35.968038 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.968060 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:35.968074 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:35.968147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:35.993884 1055021 cri.go:89] found id: ""
	I1208 01:59:35.993927 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.993936 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:35.993942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:35.994012 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:36.027031 1055021 cri.go:89] found id: ""
	I1208 01:59:36.027056 1055021 logs.go:282] 0 containers: []
	W1208 01:59:36.027074 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:36.027084 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:36.027097 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:36.092294 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:36.092315 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:36.092330 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:36.120891 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:36.120927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:36.148475 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:36.148507 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:36.216306 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:36.216344 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:38.734253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:38.744803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:38.744884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:38.777276 1055021 cri.go:89] found id: ""
	I1208 01:59:38.777305 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.777314 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:38.777320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:38.777379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:38.815858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.815894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.815903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:38.815909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:38.815979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:38.845051 1055021 cri.go:89] found id: ""
	I1208 01:59:38.845084 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.845093 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:38.845098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:38.845164 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:38.870145 1055021 cri.go:89] found id: ""
	I1208 01:59:38.870178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.870187 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:38.870193 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:38.870261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:38.897461 1055021 cri.go:89] found id: ""
	I1208 01:59:38.897489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.897498 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:38.897505 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:38.897564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:38.923327 1055021 cri.go:89] found id: ""
	I1208 01:59:38.923351 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.923360 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:38.923367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:38.923430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:38.949858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.949884 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.949893 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:38.949899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:38.949963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:38.975805 1055021 cri.go:89] found id: ""
	I1208 01:59:38.975831 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.975840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:38.975849 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:38.975861 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:39.040102 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:39.040140 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:39.057980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:39.058045 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:39.129261 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:39.129281 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:39.129297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:39.157488 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:39.157524 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:41.687952 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:41.698803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:41.698906 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:41.724062 1055021 cri.go:89] found id: ""
	I1208 01:59:41.724139 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.724171 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:41.724184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:41.724260 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:41.756674 1055021 cri.go:89] found id: ""
	I1208 01:59:41.756712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.756720 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:41.756727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:41.756797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:41.793181 1055021 cri.go:89] found id: ""
	I1208 01:59:41.793208 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.793217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:41.793223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:41.793289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:41.823566 1055021 cri.go:89] found id: ""
	I1208 01:59:41.823589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.823597 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:41.823603 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:41.823660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:41.848188 1055021 cri.go:89] found id: ""
	I1208 01:59:41.848215 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.848224 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:41.848231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:41.848289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:41.874016 1055021 cri.go:89] found id: ""
	I1208 01:59:41.874053 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.874062 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:41.874068 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:41.874144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:41.901494 1055021 cri.go:89] found id: ""
	I1208 01:59:41.901517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.901525 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:41.901531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:41.901588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:41.927897 1055021 cri.go:89] found id: ""
	I1208 01:59:41.927919 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.927928 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:41.927936 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:41.927948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:41.989449 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:41.989523 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:41.989543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:42.035690 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:42.035724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:42.065962 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:42.066011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:42.136350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:42.136460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
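	Each retry cycle first asks CRI-O for containers by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query returns an empty ID list, so not even an exited control-plane container exists yet. The same check can be reproduced on the node with the commands the runner itself uses:

	    # name-filtered query from the log; empty output means no matching container in any state
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # wider view: every container CRI-O currently knows about
	    sudo crictl ps -a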
	I1208 01:59:44.657754 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:44.669949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:44.670036 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:44.700311 1055021 cri.go:89] found id: ""
	I1208 01:59:44.700341 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.700352 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:44.700358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:44.700422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:44.726358 1055021 cri.go:89] found id: ""
	I1208 01:59:44.726383 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.726392 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:44.726398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:44.726461 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:44.761403 1055021 cri.go:89] found id: ""
	I1208 01:59:44.761430 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.761440 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:44.761447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:44.761503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:44.792746 1055021 cri.go:89] found id: ""
	I1208 01:59:44.792771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.792780 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:44.792786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:44.792845 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:44.822139 1055021 cri.go:89] found id: ""
	I1208 01:59:44.822170 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.822179 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:44.822185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:44.822246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:44.848969 1055021 cri.go:89] found id: ""
	I1208 01:59:44.849036 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.849051 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:44.849060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:44.849123 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:44.877689 1055021 cri.go:89] found id: ""
	I1208 01:59:44.877712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.877720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:44.877727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:44.877792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:44.905370 1055021 cri.go:89] found id: ""
	I1208 01:59:44.905394 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.905403 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:44.905412 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:44.905424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.923373 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:44.923410 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:44.995648 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:44.995670 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:44.995684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:45.028693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:45.028744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:45.080489 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:45.080534 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:47.697315 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:47.707837 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:47.707910 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:47.731910 1055021 cri.go:89] found id: ""
	I1208 01:59:47.731934 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.731943 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:47.731950 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:47.732009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:47.765844 1055021 cri.go:89] found id: ""
	I1208 01:59:47.765869 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.765887 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:47.765894 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:47.765955 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:47.805305 1055021 cri.go:89] found id: ""
	I1208 01:59:47.805328 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.805342 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:47.805349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:47.805407 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:47.832547 1055021 cri.go:89] found id: ""
	I1208 01:59:47.832572 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.832581 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:47.832587 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:47.832646 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:47.857492 1055021 cri.go:89] found id: ""
	I1208 01:59:47.857517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.857526 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:47.857533 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:47.857595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:47.885564 1055021 cri.go:89] found id: ""
	I1208 01:59:47.885591 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.885599 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:47.885606 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:47.885668 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:47.914630 1055021 cri.go:89] found id: ""
	I1208 01:59:47.914655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.914664 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:47.914671 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:47.914737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:47.944185 1055021 cri.go:89] found id: ""
	I1208 01:59:47.944216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.944226 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:47.944236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:47.944247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:47.973585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:47.973622 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:48.011189 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:48.011218 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:48.078148 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:48.078187 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:48.098135 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:48.098167 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:48.174366 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
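	The gathering steps repeated here (kubelet and CRI-O via journalctl, dmesg, container status, and the failing "describe nodes") are the same sources that minikube's own diagnostics collect, so the equivalent data can be pulled in one command from the host instead of reading this loop line by line. A hedged shortcut, with the profile flag only needed when more than one profile exists:

	    # collect the same kubelet/CRI-O/dmesg/kubectl material in one shot; --problems keeps only error-looking lines
	    minikube logs --problems
	    # or target a specific profile
	    minikube logs -p <profile>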
	I1208 01:59:50.674625 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:50.685161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:50.685235 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:50.712131 1055021 cri.go:89] found id: ""
	I1208 01:59:50.712158 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.712167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:50.712175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:50.712236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:50.741188 1055021 cri.go:89] found id: ""
	I1208 01:59:50.741216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.741224 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:50.741231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:50.741325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:50.778993 1055021 cri.go:89] found id: ""
	I1208 01:59:50.779016 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.779026 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:50.779034 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:50.779103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:50.820444 1055021 cri.go:89] found id: ""
	I1208 01:59:50.820477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.820487 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:50.820494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:50.820552 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:50.845727 1055021 cri.go:89] found id: ""
	I1208 01:59:50.845752 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.845761 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:50.845768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:50.845833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:50.875375 1055021 cri.go:89] found id: ""
	I1208 01:59:50.875398 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.875406 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:50.875412 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:50.875472 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:50.899812 1055021 cri.go:89] found id: ""
	I1208 01:59:50.899836 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.899846 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:50.899852 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:50.899911 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:50.925692 1055021 cri.go:89] found id: ""
	I1208 01:59:50.925717 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.925725 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:50.925735 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:50.925751 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:50.991330 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:50.991366 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:51.010240 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:51.010276 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:51.075773 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:51.075801 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:51.075813 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:51.104705 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:51.104737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
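	Before querying CRI-O, each cycle also checks for a running apiserver process directly with pgrep; that probe is the cheapest thing to rerun by hand when deciding whether the control plane ever started:

	    # exact full-command-line match for the newest kube-apiserver started for this minikube node;
	    # no output and a non-zero exit code means the process does not exist
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'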
	I1208 01:59:53.634984 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:53.645378 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:53.645451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:53.676623 1055021 cri.go:89] found id: ""
	I1208 01:59:53.676647 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.676657 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:53.676664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:53.676723 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:53.700948 1055021 cri.go:89] found id: ""
	I1208 01:59:53.700973 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.700982 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:53.700988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:53.701047 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:53.725665 1055021 cri.go:89] found id: ""
	I1208 01:59:53.725689 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.725698 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:53.725704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:53.725760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:53.750770 1055021 cri.go:89] found id: ""
	I1208 01:59:53.750794 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.750803 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:53.750809 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:53.750885 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:53.784279 1055021 cri.go:89] found id: ""
	I1208 01:59:53.784304 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.784312 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:53.784319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:53.784378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:53.812355 1055021 cri.go:89] found id: ""
	I1208 01:59:53.812381 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.812390 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:53.812396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:53.812456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:53.837608 1055021 cri.go:89] found id: ""
	I1208 01:59:53.837634 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.837642 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:53.837648 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:53.837709 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:53.863046 1055021 cri.go:89] found id: ""
	I1208 01:59:53.863076 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.863085 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:53.863095 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:53.863136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:53.928268 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:53.928309 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:53.945830 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:53.945860 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:54.012382 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:54.012407 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:54.012447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:54.043446 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:54.043481 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:56.571785 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:56.582156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:56.582228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:56.611270 1055021 cri.go:89] found id: ""
	I1208 01:59:56.611292 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.611301 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:56.611307 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:56.611371 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:56.638765 1055021 cri.go:89] found id: ""
	I1208 01:59:56.638788 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.638797 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:56.638802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:56.638888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:56.663341 1055021 cri.go:89] found id: ""
	I1208 01:59:56.663368 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.663377 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:56.663383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:56.663495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:56.688606 1055021 cri.go:89] found id: ""
	I1208 01:59:56.688633 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.688643 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:56.688649 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:56.688730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:56.714263 1055021 cri.go:89] found id: ""
	I1208 01:59:56.714287 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.714296 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:56.714303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:56.714379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:56.738023 1055021 cri.go:89] found id: ""
	I1208 01:59:56.738047 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.738056 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:56.738062 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:56.738141 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:56.767926 1055021 cri.go:89] found id: ""
	I1208 01:59:56.767951 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.767960 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:56.767966 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:56.768071 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:56.801241 1055021 cri.go:89] found id: ""
	I1208 01:59:56.801268 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.801277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:56.801286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:56.801317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:56.873621 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:56.873657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:56.891086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:56.891116 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:56.956286 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:56.956306 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:56.956319 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:56.991921 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:56.991965 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.538010 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:59.548530 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:59.548598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:59.574677 1055021 cri.go:89] found id: ""
	I1208 01:59:59.574701 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.574709 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:59.574716 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:59.574779 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:59.600311 1055021 cri.go:89] found id: ""
	I1208 01:59:59.600337 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.600346 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:59.600352 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:59.600410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:59.627833 1055021 cri.go:89] found id: ""
	I1208 01:59:59.627858 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.627867 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:59.627873 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:59.627946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:59.652005 1055021 cri.go:89] found id: ""
	I1208 01:59:59.652029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.652038 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:59.652044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:59.652138 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:59.676487 1055021 cri.go:89] found id: ""
	I1208 01:59:59.676511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.676519 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:59.676525 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:59.676581 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:59.701988 1055021 cri.go:89] found id: ""
	I1208 01:59:59.702012 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.702020 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:59.702027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:59.702085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:59.726000 1055021 cri.go:89] found id: ""
	I1208 01:59:59.726025 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.726034 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:59.726040 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:59.726100 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:59.751097 1055021 cri.go:89] found id: ""
	I1208 01:59:59.751123 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.751131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:59.751141 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:59.751154 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:59.832931 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:59.832954 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:59.832966 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:59.862055 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:59.862089 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.890385 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:59.890414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:59.959793 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:59.959825 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
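	The timestamps show the whole sequence repeating roughly every three seconds (01:59:41, :44, :47, ... 02:00:05) with no change in result, which is the shape of a fixed-interval wait loop that only stops on success or an overall timeout. A rough shell sketch of that kind of loop, with the interval and timeout assumed rather than taken from minikube's source:

	    # illustrative only: poll for a running apiserver container every 3s, give up after 10 minutes
	    deadline=$((SECONDS + 600))
	    until [ -n "$(sudo crictl ps --quiet --name=kube-apiserver)" ]; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; break; }
	      sleep 3
	    done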
	I1208 02:00:02.477852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:02.489201 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:02.489312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:02.516698 1055021 cri.go:89] found id: ""
	I1208 02:00:02.516725 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.516734 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:02.516741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:02.516825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:02.545938 1055021 cri.go:89] found id: ""
	I1208 02:00:02.545965 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.545974 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:02.545980 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:02.546051 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:02.574765 1055021 cri.go:89] found id: ""
	I1208 02:00:02.574799 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.574808 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:02.574815 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:02.574920 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:02.600958 1055021 cri.go:89] found id: ""
	I1208 02:00:02.600984 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.600992 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:02.601001 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:02.601061 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:02.627836 1055021 cri.go:89] found id: ""
	I1208 02:00:02.627862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.627872 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:02.627879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:02.627942 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:02.654803 1055021 cri.go:89] found id: ""
	I1208 02:00:02.654831 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.654864 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:02.654872 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:02.654938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:02.682455 1055021 cri.go:89] found id: ""
	I1208 02:00:02.682487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.682503 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:02.682510 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:02.682577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:02.709680 1055021 cri.go:89] found id: ""
	I1208 02:00:02.709709 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.709718 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:02.709728 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:02.709741 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:02.776682 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:02.776761 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.795697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:02.795794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:02.873752 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:02.873773 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:02.873787 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:02.903468 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:02.903511 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.438786 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:05.449615 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:05.449691 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:05.475122 1055021 cri.go:89] found id: ""
	I1208 02:00:05.475147 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.475156 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:05.475162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:05.475223 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:05.500749 1055021 cri.go:89] found id: ""
	I1208 02:00:05.500772 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.500781 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:05.500788 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:05.500854 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:05.526357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.526435 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.526456 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:05.526475 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:05.526564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:05.553466 1055021 cri.go:89] found id: ""
	I1208 02:00:05.553493 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.553502 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:05.553509 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:05.553570 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:05.583119 1055021 cri.go:89] found id: ""
	I1208 02:00:05.583145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.583154 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:05.583161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:05.583229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:05.613357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.613385 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.613394 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:05.613401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:05.613465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:05.639303 1055021 cri.go:89] found id: ""
	I1208 02:00:05.639328 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.639337 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:05.639358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:05.639422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:05.666333 1055021 cri.go:89] found id: ""
	I1208 02:00:05.666372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.666382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:05.666392 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:05.666405 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.696869 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:05.696901 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:05.762499 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:05.762536 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:05.780857 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:05.780889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:05.848522 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:05.848585 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:05.848598 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.377424 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:08.388192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:08.388265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:08.414029 1055021 cri.go:89] found id: ""
	I1208 02:00:08.414050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.414059 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:08.414065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:08.414127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:08.441760 1055021 cri.go:89] found id: ""
	I1208 02:00:08.441782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.441790 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:08.441796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:08.441857 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:08.466751 1055021 cri.go:89] found id: ""
	I1208 02:00:08.466774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.466783 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:08.466789 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:08.466870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:08.493249 1055021 cri.go:89] found id: ""
	I1208 02:00:08.493272 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.493280 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:08.493287 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:08.493345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:08.519677 1055021 cri.go:89] found id: ""
	I1208 02:00:08.519707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.519716 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:08.519722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:08.519788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:08.545435 1055021 cri.go:89] found id: ""
	I1208 02:00:08.545460 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.545469 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:08.545476 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:08.545538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:08.576588 1055021 cri.go:89] found id: ""
	I1208 02:00:08.576612 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.576621 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:08.576628 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:08.576719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:08.602665 1055021 cri.go:89] found id: ""
	I1208 02:00:08.602689 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.602697 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:08.602706 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:08.602737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:08.668015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:08.668065 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:08.685174 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:08.685203 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:08.750092 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:08.750113 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:08.750127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.781244 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:08.781278 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.323549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:11.333988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:11.334059 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:11.359294 1055021 cri.go:89] found id: ""
	I1208 02:00:11.359316 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.359325 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:11.359331 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:11.359391 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:11.385252 1055021 cri.go:89] found id: ""
	I1208 02:00:11.385274 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.385283 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:11.385289 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:11.385354 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:11.411462 1055021 cri.go:89] found id: ""
	I1208 02:00:11.411485 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.411494 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:11.411501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:11.411560 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:11.437020 1055021 cri.go:89] found id: ""
	I1208 02:00:11.437043 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.437052 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:11.437059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:11.437142 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:11.462749 1055021 cri.go:89] found id: ""
	I1208 02:00:11.462774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.462788 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:11.462795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:11.462912 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:11.487618 1055021 cri.go:89] found id: ""
	I1208 02:00:11.487642 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.487650 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:11.487656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:11.487738 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:11.517338 1055021 cri.go:89] found id: ""
	I1208 02:00:11.517411 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.517435 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:11.517454 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:11.517582 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:11.543576 1055021 cri.go:89] found id: ""
	I1208 02:00:11.543608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.543618 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:11.543670 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:11.543687 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:11.605714 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:11.605738 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:11.605754 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:11.634573 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:11.634608 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.663270 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:11.663297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:11.728036 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:11.728073 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.245900 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:14.259346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:14.259447 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:14.292891 1055021 cri.go:89] found id: ""
	I1208 02:00:14.292913 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.292922 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:14.292928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:14.292995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:14.326384 1055021 cri.go:89] found id: ""
	I1208 02:00:14.326408 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.326418 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:14.326425 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:14.326485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:14.354623 1055021 cri.go:89] found id: ""
	I1208 02:00:14.354646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.354654 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:14.354660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:14.354719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:14.382160 1055021 cri.go:89] found id: ""
	I1208 02:00:14.382187 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.382196 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:14.382203 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:14.382261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:14.408072 1055021 cri.go:89] found id: ""
	I1208 02:00:14.408141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.408166 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:14.408184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:14.408273 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:14.433739 1055021 cri.go:89] found id: ""
	I1208 02:00:14.433767 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.433776 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:14.433783 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:14.433889 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:14.460882 1055021 cri.go:89] found id: ""
	I1208 02:00:14.460906 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.460914 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:14.460921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:14.461002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:14.486630 1055021 cri.go:89] found id: ""
	I1208 02:00:14.486707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.486732 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:14.486755 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:14.486781 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:14.552732 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:14.552769 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.570940 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:14.570975 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:14.636277 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:14.636301 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:14.636317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:14.664410 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:14.664447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:17.192894 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:17.203129 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:17.203200 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:17.228497 1055021 cri.go:89] found id: ""
	I1208 02:00:17.228519 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.228528 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:17.228534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:17.228598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:17.253841 1055021 cri.go:89] found id: ""
	I1208 02:00:17.253862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.253871 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:17.253887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:17.253945 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:17.284067 1055021 cri.go:89] found id: ""
	I1208 02:00:17.284088 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.284097 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:17.284103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:17.284162 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:17.320641 1055021 cri.go:89] found id: ""
	I1208 02:00:17.320668 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.320678 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:17.320684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:17.320748 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:17.347071 1055021 cri.go:89] found id: ""
	I1208 02:00:17.347094 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.347103 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:17.347109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:17.347227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:17.373328 1055021 cri.go:89] found id: ""
	I1208 02:00:17.373357 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.373366 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:17.373372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:17.373439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:17.400408 1055021 cri.go:89] found id: ""
	I1208 02:00:17.400437 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.400446 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:17.400456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:17.400515 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:17.426232 1055021 cri.go:89] found id: ""
	I1208 02:00:17.426268 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.426277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:17.426286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:17.426298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:17.491052 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:17.491092 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:17.509546 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:17.509575 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:17.578008 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:17.578068 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:17.578090 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:17.606330 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:17.606368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:20.139003 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:20.149823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:20.149894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:20.176541 1055021 cri.go:89] found id: ""
	I1208 02:00:20.176568 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.176577 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:20.176583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:20.176647 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:20.209117 1055021 cri.go:89] found id: ""
	I1208 02:00:20.209141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.209149 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:20.209156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:20.209222 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:20.235819 1055021 cri.go:89] found id: ""
	I1208 02:00:20.235846 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.235861 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:20.235867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:20.235933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:20.268968 1055021 cri.go:89] found id: ""
	I1208 02:00:20.268997 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.269006 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:20.269019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:20.269079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:20.302684 1055021 cri.go:89] found id: ""
	I1208 02:00:20.302712 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.302721 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:20.302728 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:20.302814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:20.330459 1055021 cri.go:89] found id: ""
	I1208 02:00:20.330535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.330550 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:20.330557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:20.330632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:20.358743 1055021 cri.go:89] found id: ""
	I1208 02:00:20.358778 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.358787 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:20.358793 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:20.358881 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:20.384853 1055021 cri.go:89] found id: ""
	I1208 02:00:20.384883 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.384892 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:20.384909 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:20.384921 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:20.450466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:20.450505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:20.468842 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:20.468872 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:20.533689 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:20.533717 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:20.533732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:20.561211 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:20.561245 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.093217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:23.103855 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:23.103935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:23.129008 1055021 cri.go:89] found id: ""
	I1208 02:00:23.129084 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.129113 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:23.129122 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:23.129192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:23.154045 1055021 cri.go:89] found id: ""
	I1208 02:00:23.154071 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.154079 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:23.154086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:23.154144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:23.179982 1055021 cri.go:89] found id: ""
	I1208 02:00:23.180009 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.180018 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:23.180025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:23.180085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:23.205725 1055021 cri.go:89] found id: ""
	I1208 02:00:23.205751 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.205760 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:23.205767 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:23.205825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:23.233180 1055021 cri.go:89] found id: ""
	I1208 02:00:23.233206 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.233214 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:23.233221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:23.233280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:23.260814 1055021 cri.go:89] found id: ""
	I1208 02:00:23.260841 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.260850 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:23.260856 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:23.260915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:23.289337 1055021 cri.go:89] found id: ""
	I1208 02:00:23.289369 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.289379 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:23.289384 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:23.289451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:23.326356 1055021 cri.go:89] found id: ""
	I1208 02:00:23.326383 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.326392 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:23.326401 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:23.326414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:23.344175 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:23.344207 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:23.409693 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:23.409767 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:23.409793 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:23.437814 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:23.437848 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.472006 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:23.472034 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.036954 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:26.050218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:26.050295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:26.084077 1055021 cri.go:89] found id: ""
	I1208 02:00:26.084101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.084110 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:26.084117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:26.084179 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:26.115433 1055021 cri.go:89] found id: ""
	I1208 02:00:26.115458 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.115467 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:26.115473 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:26.115548 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:26.142798 1055021 cri.go:89] found id: ""
	I1208 02:00:26.142821 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.142829 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:26.142836 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:26.142923 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:26.169427 1055021 cri.go:89] found id: ""
	I1208 02:00:26.169449 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.169457 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:26.169465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:26.169523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:26.196837 1055021 cri.go:89] found id: ""
	I1208 02:00:26.196863 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.196873 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:26.196879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:26.196940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:26.222671 1055021 cri.go:89] found id: ""
	I1208 02:00:26.222694 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.222702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:26.222709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:26.222770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:26.258674 1055021 cri.go:89] found id: ""
	I1208 02:00:26.258696 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.258705 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:26.258711 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:26.258769 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:26.297463 1055021 cri.go:89] found id: ""
	I1208 02:00:26.297486 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.297496 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:26.297505 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:26.297520 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:26.329140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:26.329223 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:26.359625 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:26.359657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.424937 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:26.424974 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:26.443260 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:26.443293 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:26.509592 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
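The failure above repeats throughout this start attempt: kubectl cannot reach the API server on localhost:8443, and crictl finds no control-plane containers in any state, so the control plane never came up on the node. A minimal bash sketch of the same checks run manually on the node (an assumption: it presumes shell access to the node, e.g. via minikube ssh; it is not taken from the log):

    # List every container the CRI runtime knows about, in any state
    sudo crictl ps -a
    # Check whether anything is listening on the API server port 8443
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Static pod manifests the kubelet is expected to run
    ls /etc/kubernetes/manifests
    # Recent kubelet logs often explain why the static pods were not created
    sudo journalctl -u kubelet -n 100 --no-pager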
	I1208 02:00:29.010492 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:29.023086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:29.023160 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:29.051358 1055021 cri.go:89] found id: ""
	I1208 02:00:29.051380 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.051389 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:29.051395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:29.051456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:29.085536 1055021 cri.go:89] found id: ""
	I1208 02:00:29.085566 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.085575 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:29.085583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:29.085649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:29.114380 1055021 cri.go:89] found id: ""
	I1208 02:00:29.114407 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.114416 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:29.114422 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:29.114483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:29.139608 1055021 cri.go:89] found id: ""
	I1208 02:00:29.139697 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.139713 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:29.139722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:29.139800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:29.167030 1055021 cri.go:89] found id: ""
	I1208 02:00:29.167055 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.167100 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:29.167107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:29.167173 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:29.191898 1055021 cri.go:89] found id: ""
	I1208 02:00:29.191920 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.191929 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:29.191935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:29.191992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:29.216839 1055021 cri.go:89] found id: ""
	I1208 02:00:29.216870 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.216879 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:29.216889 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:29.216975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:29.246347 1055021 cri.go:89] found id: ""
	I1208 02:00:29.246372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.246382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:29.246391 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:29.246421 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:29.266473 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:29.266509 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:29.345611 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:29.345636 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:29.345648 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:29.375020 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:29.375060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:29.402360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:29.402386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:31.967515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:31.978076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:31.978147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:32.018381 1055021 cri.go:89] found id: ""
	I1208 02:00:32.018457 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.018480 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:32.018500 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:32.018611 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:32.054678 1055021 cri.go:89] found id: ""
	I1208 02:00:32.054700 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.054709 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:32.054715 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:32.054775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:32.085659 1055021 cri.go:89] found id: ""
	I1208 02:00:32.085686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.085695 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:32.085701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:32.085809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:32.112827 1055021 cri.go:89] found id: ""
	I1208 02:00:32.112892 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.112907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:32.112914 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:32.112973 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:32.141486 1055021 cri.go:89] found id: ""
	I1208 02:00:32.141513 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.141521 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:32.141527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:32.141591 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:32.166463 1055021 cri.go:89] found id: ""
	I1208 02:00:32.166489 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.166498 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:32.166504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:32.166566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:32.196018 1055021 cri.go:89] found id: ""
	I1208 02:00:32.196086 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.196111 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:32.196125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:32.196198 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:32.219763 1055021 cri.go:89] found id: ""
	I1208 02:00:32.219802 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.219812 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:32.219821 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:32.219834 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:32.237401 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:32.237431 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:32.335697 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:32.335720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:32.335732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:32.364998 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:32.365043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:32.394072 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:32.394099 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:34.958230 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:34.968535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:34.968606 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:34.993490 1055021 cri.go:89] found id: ""
	I1208 02:00:34.993515 1055021 logs.go:282] 0 containers: []
	W1208 02:00:34.993524 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:34.993531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:34.993588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:35.026482 1055021 cri.go:89] found id: ""
	I1208 02:00:35.026511 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.026521 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:35.026529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:35.026595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:35.062109 1055021 cri.go:89] found id: ""
	I1208 02:00:35.062138 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.062147 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:35.062154 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:35.062218 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:35.094672 1055021 cri.go:89] found id: ""
	I1208 02:00:35.094706 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.094715 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:35.094722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:35.094784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:35.120981 1055021 cri.go:89] found id: ""
	I1208 02:00:35.121007 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.121016 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:35.121022 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:35.121087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:35.147283 1055021 cri.go:89] found id: ""
	I1208 02:00:35.147310 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.147321 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:35.147329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:35.147392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:35.174946 1055021 cri.go:89] found id: ""
	I1208 02:00:35.175038 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.175075 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:35.175115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:35.175224 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:35.205558 1055021 cri.go:89] found id: ""
	I1208 02:00:35.205583 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.205592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:35.205601 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:35.205636 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:35.273454 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:35.273537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:35.294102 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:35.294182 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:35.363206 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:35.363227 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:35.363240 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:35.391418 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:35.391457 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:37.922946 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:37.933320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:37.933392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:37.959213 1055021 cri.go:89] found id: ""
	I1208 02:00:37.959237 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.959247 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:37.959253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:37.959311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:37.983822 1055021 cri.go:89] found id: ""
	I1208 02:00:37.983844 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.983853 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:37.983859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:37.983917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:38.015881 1055021 cri.go:89] found id: ""
	I1208 02:00:38.015909 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.015919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:38.015927 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:38.015994 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:38.047948 1055021 cri.go:89] found id: ""
	I1208 02:00:38.047971 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.047979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:38.047985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:38.048049 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:38.098187 1055021 cri.go:89] found id: ""
	I1208 02:00:38.098216 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.098227 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:38.098234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:38.098298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:38.122930 1055021 cri.go:89] found id: ""
	I1208 02:00:38.122952 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.122960 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:38.122967 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:38.123028 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:38.148405 1055021 cri.go:89] found id: ""
	I1208 02:00:38.148439 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.148449 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:38.148455 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:38.148513 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:38.174446 1055021 cri.go:89] found id: ""
	I1208 02:00:38.174522 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.174544 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:38.174565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:38.174602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:38.239470 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:38.239505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:38.257924 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:38.258079 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:38.328235 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:38.328302 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:38.328321 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:38.356585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:38.356619 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:40.887527 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:40.897939 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:40.898011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:40.922663 1055021 cri.go:89] found id: ""
	I1208 02:00:40.922686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.922695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:40.922701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:40.922760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:40.947304 1055021 cri.go:89] found id: ""
	I1208 02:00:40.947371 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.947397 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:40.947409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:40.947484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:40.973263 1055021 cri.go:89] found id: ""
	I1208 02:00:40.973290 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.973299 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:40.973305 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:40.973365 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:40.998615 1055021 cri.go:89] found id: ""
	I1208 02:00:40.998648 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.998658 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:40.998665 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:40.998735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:41.034153 1055021 cri.go:89] found id: ""
	I1208 02:00:41.034180 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.034190 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:41.034196 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:41.034255 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:41.063886 1055021 cri.go:89] found id: ""
	I1208 02:00:41.063916 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.063925 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:41.063931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:41.063993 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:41.090937 1055021 cri.go:89] found id: ""
	I1208 02:00:41.090966 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.090976 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:41.090982 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:41.091046 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:41.117814 1055021 cri.go:89] found id: ""
	I1208 02:00:41.117839 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.117849 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:41.117858 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:41.117870 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:41.182312 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:41.182348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:41.200044 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:41.200071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:41.273066 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:41.273095 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:41.273108 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:41.308256 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:41.308298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:43.843380 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:43.854135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:43.854204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:43.879332 1055021 cri.go:89] found id: ""
	I1208 02:00:43.879356 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.879365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:43.879371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:43.879431 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:43.903897 1055021 cri.go:89] found id: ""
	I1208 02:00:43.903921 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.903930 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:43.903935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:43.904010 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:43.928349 1055021 cri.go:89] found id: ""
	I1208 02:00:43.928377 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.928386 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:43.928396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:43.928453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:43.957013 1055021 cri.go:89] found id: ""
	I1208 02:00:43.957046 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.957060 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:43.957066 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:43.957137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:43.981711 1055021 cri.go:89] found id: ""
	I1208 02:00:43.981784 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.981819 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:43.981843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:43.981933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:44.021808 1055021 cri.go:89] found id: ""
	I1208 02:00:44.021842 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.021851 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:44.021859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:44.021940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:44.053536 1055021 cri.go:89] found id: ""
	I1208 02:00:44.053608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.053631 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:44.053650 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:44.053735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:44.087893 1055021 cri.go:89] found id: ""
	I1208 02:00:44.087958 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.087975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:44.087985 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:44.087997 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:44.153453 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:44.153493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:44.172720 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:44.172750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:44.242553 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
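Each retry cycle repeats roughly every three seconds with identical output, so the wait loop is polling a control plane that never starts rather than one that is merely slow. The same condition can be confirmed from the host with two commands (a sketch; <profile> is a placeholder, since the profile name is not shown in this excerpt):

    # Component status as minikube sees it; <profile> stands in for the actual profile name
    minikube status -p <profile>
    # Query the API server through the profile's kubeconfig context; expect "connection refused" while it is down
    kubectl --context <profile> get nodes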
	I1208 02:00:44.242575 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:44.242587 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:44.273804 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:44.273889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:46.805601 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:46.815929 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:46.815999 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:46.840623 1055021 cri.go:89] found id: ""
	I1208 02:00:46.840646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.840655 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:46.840661 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:46.840721 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:46.866056 1055021 cri.go:89] found id: ""
	I1208 02:00:46.866082 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.866090 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:46.866096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:46.866156 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:46.890598 1055021 cri.go:89] found id: ""
	I1208 02:00:46.890623 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.890632 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:46.890638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:46.890699 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:46.917031 1055021 cri.go:89] found id: ""
	I1208 02:00:46.917101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.917125 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:46.917142 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:46.917230 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:46.941427 1055021 cri.go:89] found id: ""
	I1208 02:00:46.941450 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.941459 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:46.941465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:46.941524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:46.971991 1055021 cri.go:89] found id: ""
	I1208 02:00:46.972015 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.972024 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:46.972031 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:46.972087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:47.000365 1055021 cri.go:89] found id: ""
	I1208 02:00:47.000393 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.000402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:47.000409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:47.000500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:47.039853 1055021 cri.go:89] found id: ""
	I1208 02:00:47.039934 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.039968 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:47.040014 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:47.040070 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:47.124159 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:47.124199 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:47.142393 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:47.142436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:47.204667 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:47.204688 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:47.204700 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:47.233531 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:47.233572 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:49.777314 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:49.787953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:49.788027 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:49.814344 1055021 cri.go:89] found id: ""
	I1208 02:00:49.814368 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.814376 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:49.814383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:49.814443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:49.843148 1055021 cri.go:89] found id: ""
	I1208 02:00:49.843172 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.843180 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:49.843187 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:49.843245 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:49.868221 1055021 cri.go:89] found id: ""
	I1208 02:00:49.868245 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.868253 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:49.868260 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:49.868319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:49.892756 1055021 cri.go:89] found id: ""
	I1208 02:00:49.892782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.892792 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:49.892799 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:49.892879 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:49.921697 1055021 cri.go:89] found id: ""
	I1208 02:00:49.921730 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.921738 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:49.921745 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:49.921818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:49.946935 1055021 cri.go:89] found id: ""
	I1208 02:00:49.947000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.947018 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:49.947025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:49.947102 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:49.972386 1055021 cri.go:89] found id: ""
	I1208 02:00:49.972410 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.972418 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:49.972427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:49.972485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:49.997299 1055021 cri.go:89] found id: ""
	I1208 02:00:49.997324 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.997332 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:49.997342 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:49.997354 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:50.024427 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:50.024465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:50.106428 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:50.106452 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:50.106466 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:50.134825 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:50.134944 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:50.164257 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:50.164286 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
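	The block above is one iteration of minikube's apiserver wait loop: it probes for a kube-apiserver process with pgrep, asks crictl whether any control-plane container exists, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying a few seconds later. A minimal manual reproduction of the same checks, assuming shell access to the node; the component names are copied from the probes above, but the loop itself is illustrative and not minikube's own code:

	# Probe for a running kube-apiserver process, as the log does with pgrep
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	# Ask the CRI (via crictl) for each control-plane container the loop polls
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name" | grep -q . || echo "no container: $name"
	done
	# Same log sources minikube gathers on each failed iteration
	sudo journalctl -u kubelet -n 400 --no-pager | tail
	sudo journalctl -u crio -n 400 --no-pager | tail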
	I1208 02:00:52.731852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:52.743466 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:52.743547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:52.770730 1055021 cri.go:89] found id: ""
	I1208 02:00:52.770754 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.770763 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:52.770769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:52.770837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:52.795524 1055021 cri.go:89] found id: ""
	I1208 02:00:52.795547 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.795555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:52.795562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:52.795622 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:52.820947 1055021 cri.go:89] found id: ""
	I1208 02:00:52.820976 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.820986 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:52.820993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:52.821054 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:52.846461 1055021 cri.go:89] found id: ""
	I1208 02:00:52.846487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.846495 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:52.846502 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:52.846614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:52.876556 1055021 cri.go:89] found id: ""
	I1208 02:00:52.876582 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.876591 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:52.876598 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:52.876658 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:52.902890 1055021 cri.go:89] found id: ""
	I1208 02:00:52.902915 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.902924 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:52.902931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:52.902995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:52.927861 1055021 cri.go:89] found id: ""
	I1208 02:00:52.927936 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.927952 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:52.927960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:52.928018 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:52.952070 1055021 cri.go:89] found id: ""
	I1208 02:00:52.952093 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.952102 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:52.952111 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:52.952123 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:52.969988 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:52.970071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:53.047400 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:53.047420 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:53.047432 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:53.079007 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:53.079096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:53.110493 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:53.110518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:55.678655 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:55.689237 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:55.689308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:55.716663 1055021 cri.go:89] found id: ""
	I1208 02:00:55.716685 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.716694 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:55.716700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:55.716767 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:55.742016 1055021 cri.go:89] found id: ""
	I1208 02:00:55.742042 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.742051 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:55.742057 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:55.742117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:55.771093 1055021 cri.go:89] found id: ""
	I1208 02:00:55.771116 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.771125 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:55.771131 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:55.771192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:55.795221 1055021 cri.go:89] found id: ""
	I1208 02:00:55.795243 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.795252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:55.795258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:55.795321 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:55.824380 1055021 cri.go:89] found id: ""
	I1208 02:00:55.824402 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.824411 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:55.824417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:55.824482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:55.853339 1055021 cri.go:89] found id: ""
	I1208 02:00:55.853362 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.853370 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:55.853376 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:55.853439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:55.879120 1055021 cri.go:89] found id: ""
	I1208 02:00:55.879145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.879154 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:55.879160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:55.879229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:55.904782 1055021 cri.go:89] found id: ""
	I1208 02:00:55.904811 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.904820 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:55.904829 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:55.904840 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:55.936603 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:55.936627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:56.002394 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:56.002436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:56.025805 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:56.025962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:56.100621 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:56.100643 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:56.100655 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:58.632608 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:58.643205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:58.643281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:58.668717 1055021 cri.go:89] found id: ""
	I1208 02:00:58.668741 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.668750 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:58.668756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:58.668818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:58.693510 1055021 cri.go:89] found id: ""
	I1208 02:00:58.693535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.693543 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:58.693550 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:58.693614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:58.718959 1055021 cri.go:89] found id: ""
	I1208 02:00:58.719050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.719071 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:58.719079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:58.719153 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:58.743668 1055021 cri.go:89] found id: ""
	I1208 02:00:58.743691 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.743700 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:58.743707 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:58.743765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:58.772612 1055021 cri.go:89] found id: ""
	I1208 02:00:58.772679 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.772700 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:58.772718 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:58.772809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:58.798178 1055021 cri.go:89] found id: ""
	I1208 02:00:58.798204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.798212 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:58.798218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:58.798278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:58.822926 1055021 cri.go:89] found id: ""
	I1208 02:00:58.823000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.823018 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:58.823026 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:58.823097 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:58.849170 1055021 cri.go:89] found id: ""
	I1208 02:00:58.849204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.849214 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:58.849249 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:58.849273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:58.916845 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:58.916884 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:58.934980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:58.935008 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:59.004330 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:59.004355 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:59.004368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:59.034521 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:59.034558 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.569349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:01.581275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:01.581356 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:01.614013 1055021 cri.go:89] found id: ""
	I1208 02:01:01.614040 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.614052 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:01.614059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:01.614120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:01.642283 1055021 cri.go:89] found id: ""
	I1208 02:01:01.642311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.642321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:01.642327 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:01.642388 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:01.668888 1055021 cri.go:89] found id: ""
	I1208 02:01:01.668916 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.668927 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:01.668933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:01.669045 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:01.696848 1055021 cri.go:89] found id: ""
	I1208 02:01:01.696890 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.696917 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:01.696924 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:01.697002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:01.724280 1055021 cri.go:89] found id: ""
	I1208 02:01:01.724314 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.724323 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:01.724329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:01.724397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:01.757961 1055021 cri.go:89] found id: ""
	I1208 02:01:01.757993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.758002 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:01.758009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:01.758076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:01.791626 1055021 cri.go:89] found id: ""
	I1208 02:01:01.791652 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.791663 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:01.791669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:01.791734 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:01.824543 1055021 cri.go:89] found id: ""
	I1208 02:01:01.824614 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.824631 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:01.824643 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:01.824656 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.858339 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:01.858368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:01.923001 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:01.923043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:01.942107 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:01.942139 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:02.016342 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:02.016379 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:02.016393 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.550723 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:04.561389 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:04.561458 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:04.587293 1055021 cri.go:89] found id: ""
	I1208 02:01:04.587319 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.587329 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:04.587335 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:04.587398 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:04.612287 1055021 cri.go:89] found id: ""
	I1208 02:01:04.612313 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.612321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:04.612328 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:04.612389 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:04.637981 1055021 cri.go:89] found id: ""
	I1208 02:01:04.638006 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.638016 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:04.638023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:04.638083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:04.666122 1055021 cri.go:89] found id: ""
	I1208 02:01:04.666150 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.666159 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:04.666166 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:04.666228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:04.691775 1055021 cri.go:89] found id: ""
	I1208 02:01:04.691799 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.691807 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:04.691813 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:04.691877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:04.716584 1055021 cri.go:89] found id: ""
	I1208 02:01:04.716610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.716619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:04.716626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:04.716684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:04.741247 1055021 cri.go:89] found id: ""
	I1208 02:01:04.741284 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.741297 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:04.741303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:04.741394 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:04.777041 1055021 cri.go:89] found id: ""
	I1208 02:01:04.777070 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.777079 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:04.777088 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:04.777100 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:04.797448 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:04.797478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:04.865442 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:04.865465 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:04.865478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.893232 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:04.893270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:04.921152 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:04.921183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.486177 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:07.496522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:07.496608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:07.521126 1055021 cri.go:89] found id: ""
	I1208 02:01:07.521202 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.521226 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:07.521244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:07.521333 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:07.549393 1055021 cri.go:89] found id: ""
	I1208 02:01:07.549458 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.549483 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:07.549501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:07.549585 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:07.575624 1055021 cri.go:89] found id: ""
	I1208 02:01:07.575699 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.575715 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:07.575722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:07.575784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:07.604231 1055021 cri.go:89] found id: ""
	I1208 02:01:07.604296 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.604310 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:07.604317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:07.604377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:07.629146 1055021 cri.go:89] found id: ""
	I1208 02:01:07.629177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.629186 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:07.629192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:07.629267 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:07.654573 1055021 cri.go:89] found id: ""
	I1208 02:01:07.654598 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.654607 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:07.654614 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:07.654682 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:07.679672 1055021 cri.go:89] found id: ""
	I1208 02:01:07.679746 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.679762 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:07.679769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:07.679841 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:07.705327 1055021 cri.go:89] found id: ""
	I1208 02:01:07.705353 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.705362 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:07.705371 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:07.705386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.770583 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:07.770665 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:07.788444 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:07.788473 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:07.862214 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:07.862236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:07.862248 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:07.891006 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:07.891043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.422919 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:10.433424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:10.433496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:10.458269 1055021 cri.go:89] found id: ""
	I1208 02:01:10.458295 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.458303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:10.458319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:10.458397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:10.485114 1055021 cri.go:89] found id: ""
	I1208 02:01:10.485138 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.485146 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:10.485152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:10.485211 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:10.512785 1055021 cri.go:89] found id: ""
	I1208 02:01:10.512808 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.512817 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:10.512823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:10.512884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:10.538032 1055021 cri.go:89] found id: ""
	I1208 02:01:10.538057 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.538066 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:10.538072 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:10.538130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:10.568288 1055021 cri.go:89] found id: ""
	I1208 02:01:10.568311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.568364 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:10.568379 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:10.568445 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:10.593987 1055021 cri.go:89] found id: ""
	I1208 02:01:10.594012 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.594021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:10.594028 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:10.594087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:10.619212 1055021 cri.go:89] found id: ""
	I1208 02:01:10.619237 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.619245 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:10.619251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:10.619311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:10.645349 1055021 cri.go:89] found id: ""
	I1208 02:01:10.645384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.645393 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:10.645402 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:10.645414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:10.707691 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:10.707713 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:10.707726 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:10.735113 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:10.735148 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.768113 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:10.768142 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:10.843634 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:10.843672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.362994 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:13.373991 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:13.374082 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:13.400090 1055021 cri.go:89] found id: ""
	I1208 02:01:13.400127 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.400136 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:13.400143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:13.400212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:13.425846 1055021 cri.go:89] found id: ""
	I1208 02:01:13.425872 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.425881 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:13.425887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:13.425949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:13.451450 1055021 cri.go:89] found id: ""
	I1208 02:01:13.451478 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.451487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:13.451493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:13.451554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:13.476315 1055021 cri.go:89] found id: ""
	I1208 02:01:13.476341 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.476350 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:13.476357 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:13.476419 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:13.503320 1055021 cri.go:89] found id: ""
	I1208 02:01:13.503346 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.503355 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:13.503362 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:13.503430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:13.528258 1055021 cri.go:89] found id: ""
	I1208 02:01:13.528290 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.528299 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:13.528306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:13.528375 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:13.553751 1055021 cri.go:89] found id: ""
	I1208 02:01:13.553784 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.553794 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:13.553800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:13.553871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:13.580159 1055021 cri.go:89] found id: ""
	I1208 02:01:13.580183 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.580192 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:13.580200 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:13.580212 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:13.649628 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:13.649678 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.668358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:13.668451 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:13.739767 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:13.739835 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:13.739881 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:13.771646 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:13.771684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
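
The block above is one full iteration of minikube's diagnostic loop while it waits for the control plane: it probes for a kube-apiserver process, asks CRI-O (via crictl) for each control-plane container, and then collects kubelet, dmesg, describe-nodes, CRI-O and container-status output. Every crictl query returns an empty ID list, so the loop retries a few seconds later. As a rough sketch, the same per-component check can be run by hand on the node (assuming SSH access; the crictl flags are exactly the ones logged above):

	# same query minikube issues for each control-plane component
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<no containers found>}"
	done
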
	I1208 02:01:16.306613 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:16.317302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:16.317372 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:16.343331 1055021 cri.go:89] found id: ""
	I1208 02:01:16.343356 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.343365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:16.343374 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:16.343433 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:16.369486 1055021 cri.go:89] found id: ""
	I1208 02:01:16.369507 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.369516 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:16.369522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:16.369589 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:16.394887 1055021 cri.go:89] found id: ""
	I1208 02:01:16.394911 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.394919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:16.394926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:16.394983 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:16.419429 1055021 cri.go:89] found id: ""
	I1208 02:01:16.419453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.419461 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:16.419467 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:16.419532 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:16.447941 1055021 cri.go:89] found id: ""
	I1208 02:01:16.448014 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.448038 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:16.448060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:16.448137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:16.477380 1055021 cri.go:89] found id: ""
	I1208 02:01:16.477404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.477414 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:16.477420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:16.477479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:16.502633 1055021 cri.go:89] found id: ""
	I1208 02:01:16.502658 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.502667 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:16.502674 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:16.502776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:16.532861 1055021 cri.go:89] found id: ""
	I1208 02:01:16.532886 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.532895 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:16.532904 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:16.532943 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.561207 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:16.561235 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:16.629585 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:16.629623 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:16.647847 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:16.647876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:16.713384 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:16.713404 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:16.713417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.242742 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:19.253432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:19.253496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:19.282053 1055021 cri.go:89] found id: ""
	I1208 02:01:19.282075 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.282091 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:19.282097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:19.282154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:19.317196 1055021 cri.go:89] found id: ""
	I1208 02:01:19.317218 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.317226 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:19.317232 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:19.317291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:19.344133 1055021 cri.go:89] found id: ""
	I1208 02:01:19.344155 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.344164 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:19.344170 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:19.344231 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:19.369544 1055021 cri.go:89] found id: ""
	I1208 02:01:19.369567 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.369576 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:19.369582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:19.369641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:19.394138 1055021 cri.go:89] found id: ""
	I1208 02:01:19.394161 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.394170 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:19.394176 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:19.394234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:19.421882 1055021 cri.go:89] found id: ""
	I1208 02:01:19.421906 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.421915 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:19.421921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:19.421991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:19.447254 1055021 cri.go:89] found id: ""
	I1208 02:01:19.447280 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.447289 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:19.447295 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:19.447359 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:19.471872 1055021 cri.go:89] found id: ""
	I1208 02:01:19.471898 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.471907 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:19.471916 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:19.471929 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:19.537545 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:19.537583 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:19.556105 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:19.556134 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:19.617255 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:19.617275 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:19.617288 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.645378 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:19.645413 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.176988 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:22.187407 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:22.187482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:22.216526 1055021 cri.go:89] found id: ""
	I1208 02:01:22.216551 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.216560 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:22.216567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:22.216629 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:22.241409 1055021 cri.go:89] found id: ""
	I1208 02:01:22.241437 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.241446 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:22.241452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:22.241510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:22.275844 1055021 cri.go:89] found id: ""
	I1208 02:01:22.275873 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.275882 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:22.275888 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:22.275951 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:22.304532 1055021 cri.go:89] found id: ""
	I1208 02:01:22.304560 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.304575 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:22.304582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:22.304640 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:22.347626 1055021 cri.go:89] found id: ""
	I1208 02:01:22.347653 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.347663 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:22.347669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:22.347730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:22.374178 1055021 cri.go:89] found id: ""
	I1208 02:01:22.374205 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.374215 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:22.374221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:22.374280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:22.404202 1055021 cri.go:89] found id: ""
	I1208 02:01:22.404229 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.404238 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:22.404244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:22.404311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:22.429827 1055021 cri.go:89] found id: ""
	I1208 02:01:22.429852 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.429861 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:22.429869 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:22.429880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.461216 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:22.461241 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:22.529595 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:22.529634 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:22.547808 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:22.547841 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:22.614795 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:22.614824 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:22.614836 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
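
Each iteration also attempts `kubectl describe nodes` against the node-local kubeconfig and fails with "connection refused" on localhost:8443, which matches the empty crictl results: the apiserver container was never created, so nothing is listening on the secured port. A hedged sketch of that probe, reusing the binary and kubeconfig paths shown in the log (it assumes the v1.35.0-beta.0 kubectl binary is present at that path on the node):

	# the "describe nodes" probe minikube runs on every iteration
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig describe nodes \
	  || echo "apiserver unreachable on localhost:8443"
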
	I1208 02:01:25.143485 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:25.154329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:25.154413 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:25.180079 1055021 cri.go:89] found id: ""
	I1208 02:01:25.180105 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.180114 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:25.180121 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:25.180180 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:25.204723 1055021 cri.go:89] found id: ""
	I1208 02:01:25.204753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.204761 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:25.204768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:25.204825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:25.229571 1055021 cri.go:89] found id: ""
	I1208 02:01:25.229596 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.229604 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:25.229611 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:25.229669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:25.256859 1055021 cri.go:89] found id: ""
	I1208 02:01:25.256888 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.256896 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:25.256903 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:25.256966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:25.286130 1055021 cri.go:89] found id: ""
	I1208 02:01:25.286159 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.286169 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:25.286175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:25.286240 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:25.316764 1055021 cri.go:89] found id: ""
	I1208 02:01:25.316797 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.316806 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:25.316819 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:25.316888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:25.343685 1055021 cri.go:89] found id: ""
	I1208 02:01:25.343753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.343781 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:25.343795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:25.343874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:25.368793 1055021 cri.go:89] found id: ""
	I1208 02:01:25.368819 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.368828 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:25.368864 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:25.368882 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:25.386567 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:25.386594 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:25.454148 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:25.454180 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:25.454193 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.482372 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:25.482406 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:25.512534 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:25.512561 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.077014 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:28.087810 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:28.087929 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:28.117064 1055021 cri.go:89] found id: ""
	I1208 02:01:28.117090 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.117100 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:28.117107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:28.117166 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:28.142720 1055021 cri.go:89] found id: ""
	I1208 02:01:28.142747 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.142756 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:28.142763 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:28.142820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:28.169323 1055021 cri.go:89] found id: ""
	I1208 02:01:28.169349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.169357 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:28.169364 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:28.169423 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:28.198413 1055021 cri.go:89] found id: ""
	I1208 02:01:28.198441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.198450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:28.198456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:28.198538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:28.222900 1055021 cri.go:89] found id: ""
	I1208 02:01:28.222925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.222935 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:28.222941 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:28.223006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:28.252429 1055021 cri.go:89] found id: ""
	I1208 02:01:28.252453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.252462 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:28.252468 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:28.252528 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:28.285260 1055021 cri.go:89] found id: ""
	I1208 02:01:28.285287 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.285296 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:28.285302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:28.285362 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:28.322093 1055021 cri.go:89] found id: ""
	I1208 02:01:28.322122 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.322131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:28.322140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:28.322151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:28.358086 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:28.358113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.422767 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:28.422811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:28.441151 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:28.441185 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:28.510892 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:28.510919 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:28.510932 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.041345 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:31.056282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:31.056357 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:31.087982 1055021 cri.go:89] found id: ""
	I1208 02:01:31.088007 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.088017 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:31.088023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:31.088086 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:31.113983 1055021 cri.go:89] found id: ""
	I1208 02:01:31.114005 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.114014 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:31.114025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:31.114083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:31.141045 1055021 cri.go:89] found id: ""
	I1208 02:01:31.141069 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.141078 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:31.141085 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:31.141154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:31.167841 1055021 cri.go:89] found id: ""
	I1208 02:01:31.167864 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.167873 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:31.167880 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:31.167937 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:31.193449 1055021 cri.go:89] found id: ""
	I1208 02:01:31.193471 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.193479 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:31.193485 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:31.193542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:31.220825 1055021 cri.go:89] found id: ""
	I1208 02:01:31.220850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.220859 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:31.220865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:31.220926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:31.246036 1055021 cri.go:89] found id: ""
	I1208 02:01:31.246063 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.246071 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:31.246077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:31.246140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:31.282360 1055021 cri.go:89] found id: ""
	I1208 02:01:31.282388 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.282396 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:31.282405 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:31.282416 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:31.351320 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:31.351368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:31.370774 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:31.370887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:31.434743 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:31.434763 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:31.434775 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.462946 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:31.462982 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
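
With no control-plane containers to inspect, the actionable evidence is in the node-level logs the loop keeps collecting: the kubelet and CRI-O journals plus kernel warnings. A sketch of that collection step, with the flags copied from the log lines above:

	# node-level logs gathered on every iteration
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
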
	I1208 02:01:33.992261 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:34.004797 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:34.004891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:34.044483 1055021 cri.go:89] found id: ""
	I1208 02:01:34.044506 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.044516 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:34.044523 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:34.044598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:34.072528 1055021 cri.go:89] found id: ""
	I1208 02:01:34.072564 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.072573 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:34.072580 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:34.072654 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:34.102278 1055021 cri.go:89] found id: ""
	I1208 02:01:34.102357 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.102379 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:34.102399 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:34.102487 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:34.129526 1055021 cri.go:89] found id: ""
	I1208 02:01:34.129601 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.129634 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:34.129656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:34.129776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:34.155663 1055021 cri.go:89] found id: ""
	I1208 02:01:34.155689 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.155698 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:34.155704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:34.155777 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:34.186951 1055021 cri.go:89] found id: ""
	I1208 02:01:34.186978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.186988 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:34.186996 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:34.187104 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:34.212379 1055021 cri.go:89] found id: ""
	I1208 02:01:34.212404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.212423 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:34.212430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:34.212489 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:34.238401 1055021 cri.go:89] found id: ""
	I1208 02:01:34.238438 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.238447 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:34.238456 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:34.238468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:34.278895 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:34.278970 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:34.356262 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:34.356303 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:34.376513 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:34.376545 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:34.447804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:34.447829 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:34.447843 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
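
The "container status" step uses a small fallback chain: it prefers crictl when it is on the PATH and falls back to plain docker otherwise. Written out on its own (the same one-liner as in the log, split only for readability):

	# container status with runtime fallback, as logged
	sudo "$(which crictl || echo crictl)" ps -a \
	  || sudo docker ps -a
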
	I1208 02:01:36.976756 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:36.987574 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:36.987651 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:37.035351 1055021 cri.go:89] found id: ""
	I1208 02:01:37.035376 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.035386 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:37.035393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:37.035457 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:37.065004 1055021 cri.go:89] found id: ""
	I1208 02:01:37.065026 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.065034 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:37.065041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:37.065099 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:37.092804 1055021 cri.go:89] found id: ""
	I1208 02:01:37.092828 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.092837 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:37.092843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:37.092901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:37.117820 1055021 cri.go:89] found id: ""
	I1208 02:01:37.117849 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.117857 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:37.117865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:37.117924 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:37.143955 1055021 cri.go:89] found id: ""
	I1208 02:01:37.143978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.143987 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:37.143993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:37.144055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:37.173740 1055021 cri.go:89] found id: ""
	I1208 02:01:37.173764 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.173772 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:37.173779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:37.173838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:37.202687 1055021 cri.go:89] found id: ""
	I1208 02:01:37.202710 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.202719 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:37.202725 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:37.202786 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:37.229307 1055021 cri.go:89] found id: ""
	I1208 02:01:37.229331 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.229339 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:37.229347 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:37.229360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:37.247500 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:37.247530 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:37.329229 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:37.329252 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:37.329267 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:37.358197 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:37.358238 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:37.387860 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:37.387889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:39.956266 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:39.966752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:39.966823 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:39.991660 1055021 cri.go:89] found id: ""
	I1208 02:01:39.991686 1055021 logs.go:282] 0 containers: []
	W1208 02:01:39.991695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:39.991701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:39.991763 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:40.027823 1055021 cri.go:89] found id: ""
	I1208 02:01:40.027905 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.027928 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:40.027949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:40.028063 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:40.064388 1055021 cri.go:89] found id: ""
	I1208 02:01:40.064464 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.064487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:40.064508 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:40.064594 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:40.094787 1055021 cri.go:89] found id: ""
	I1208 02:01:40.094814 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.094832 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:40.094858 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:40.094922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:40.120620 1055021 cri.go:89] found id: ""
	I1208 02:01:40.120645 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.120654 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:40.120660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:40.120720 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:40.153070 1055021 cri.go:89] found id: ""
	I1208 02:01:40.153097 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.153106 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:40.153112 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:40.153183 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:40.181896 1055021 cri.go:89] found id: ""
	I1208 02:01:40.181925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.181935 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:40.181942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:40.182004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:40.209414 1055021 cri.go:89] found id: ""
	I1208 02:01:40.209441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.209450 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:40.209459 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:40.209470 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:40.274756 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:40.274858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:40.294225 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:40.294364 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:40.365754 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:40.365778 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:40.365791 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:40.394699 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:40.394732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:42.924136 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:42.934800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:42.934894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:42.961825 1055021 cri.go:89] found id: ""
	I1208 02:01:42.961850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.961859 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:42.961867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:42.961927 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:42.988379 1055021 cri.go:89] found id: ""
	I1208 02:01:42.988403 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.988412 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:42.988418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:42.988503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:43.023024 1055021 cri.go:89] found id: ""
	I1208 02:01:43.023047 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.023056 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:43.023063 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:43.023139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:43.057964 1055021 cri.go:89] found id: ""
	I1208 02:01:43.057993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.058001 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:43.058008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:43.058073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:43.088198 1055021 cri.go:89] found id: ""
	I1208 02:01:43.088221 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.088229 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:43.088235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:43.088295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:43.116924 1055021 cri.go:89] found id: ""
	I1208 02:01:43.116950 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.116959 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:43.116965 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:43.117042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:43.143043 1055021 cri.go:89] found id: ""
	I1208 02:01:43.143156 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.143172 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:43.143180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:43.143274 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:43.172524 1055021 cri.go:89] found id: ""
	I1208 02:01:43.172547 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.172556 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:43.172565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:43.172577 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:43.237127 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:43.237162 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:43.256485 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:43.256516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:43.325704 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:43.325725 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:43.325737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:43.354439 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:43.354477 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:45.885598 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:45.896346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:45.896416 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:45.921473 1055021 cri.go:89] found id: ""
	I1208 02:01:45.921499 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.921508 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:45.921515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:45.921576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:45.945701 1055021 cri.go:89] found id: ""
	I1208 02:01:45.945725 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.945734 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:45.945740 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:45.945800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:45.973191 1055021 cri.go:89] found id: ""
	I1208 02:01:45.973213 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.973222 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:45.973228 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:45.973289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:45.999665 1055021 cri.go:89] found id: ""
	I1208 02:01:45.999741 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.999764 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:45.999782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:45.999872 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:46.041104 1055021 cri.go:89] found id: ""
	I1208 02:01:46.041176 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.041202 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:46.041224 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:46.041300 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:46.076259 1055021 cri.go:89] found id: ""
	I1208 02:01:46.076332 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.076355 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:46.076373 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:46.076450 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:46.108098 1055021 cri.go:89] found id: ""
	I1208 02:01:46.108163 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.108179 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:46.108186 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:46.108247 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:46.134928 1055021 cri.go:89] found id: ""
	I1208 02:01:46.134964 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.134974 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:46.134983 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:46.134995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:46.164421 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:46.164498 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:46.233311 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:46.233358 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:46.253422 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:46.253502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:46.336577 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:46.336600 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:46.336614 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:48.865787 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:48.876567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:48.876642 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:48.901147 1055021 cri.go:89] found id: ""
	I1208 02:01:48.901177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.901185 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:48.901192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:48.901250 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:48.927326 1055021 cri.go:89] found id: ""
	I1208 02:01:48.927351 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.927360 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:48.927366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:48.927424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:48.951970 1055021 cri.go:89] found id: ""
	I1208 02:01:48.951994 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.952003 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:48.952009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:48.952073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:48.976700 1055021 cri.go:89] found id: ""
	I1208 02:01:48.976724 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.976732 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:48.976739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:48.976796 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:49.005321 1055021 cri.go:89] found id: ""
	I1208 02:01:49.005349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.005359 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:49.005366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:49.005432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:49.045336 1055021 cri.go:89] found id: ""
	I1208 02:01:49.045359 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.045368 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:49.045397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:49.045478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:49.074970 1055021 cri.go:89] found id: ""
	I1208 02:01:49.074997 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.075006 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:49.075012 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:49.075070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:49.100757 1055021 cri.go:89] found id: ""
	I1208 02:01:49.100780 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.100788 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:49.100796 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:49.100808 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:49.165827 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:49.165862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:49.183539 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:49.183618 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:49.249850 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:49.249874 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:49.249887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:49.280238 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:49.280270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:51.819515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:51.830251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:51.830329 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:51.856077 1055021 cri.go:89] found id: ""
	I1208 02:01:51.856098 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.856107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:51.856113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:51.856170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:51.882057 1055021 cri.go:89] found id: ""
	I1208 02:01:51.882086 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.882096 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:51.882103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:51.882170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:51.908531 1055021 cri.go:89] found id: ""
	I1208 02:01:51.908572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.908582 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:51.908588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:51.908649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:51.933571 1055021 cri.go:89] found id: ""
	I1208 02:01:51.933594 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.933603 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:51.933610 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:51.933671 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:51.959716 1055021 cri.go:89] found id: ""
	I1208 02:01:51.959777 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.959800 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:51.959825 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:51.959903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:51.985320 1055021 cri.go:89] found id: ""
	I1208 02:01:51.985384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.985409 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:51.985427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:51.985507 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:52.029640 1055021 cri.go:89] found id: ""
	I1208 02:01:52.029709 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.029736 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:52.029756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:52.029835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:52.060725 1055021 cri.go:89] found id: ""
	I1208 02:01:52.060803 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.060826 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:52.060848 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:52.060874 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:52.129431 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:52.129468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:52.148064 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:52.148095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:52.220103 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:52.220125 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:52.220137 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:52.248853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:52.248892 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:54.781319 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:54.791942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:54.792009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:54.816799 1055021 cri.go:89] found id: ""
	I1208 02:01:54.816821 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.816830 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:54.816835 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:54.816893 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:54.846002 1055021 cri.go:89] found id: ""
	I1208 02:01:54.846028 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.846036 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:54.846043 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:54.846101 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:54.870704 1055021 cri.go:89] found id: ""
	I1208 02:01:54.870729 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.870737 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:54.870744 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:54.870807 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:54.897236 1055021 cri.go:89] found id: ""
	I1208 02:01:54.897302 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.897327 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:54.897347 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:54.897432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:54.921729 1055021 cri.go:89] found id: ""
	I1208 02:01:54.921754 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.921763 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:54.921769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:54.921830 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:54.949586 1055021 cri.go:89] found id: ""
	I1208 02:01:54.949610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.949619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:54.949626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:54.949687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:54.976595 1055021 cri.go:89] found id: ""
	I1208 02:01:54.976618 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.976627 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:54.976633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:54.976708 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:55.012149 1055021 cri.go:89] found id: ""
	I1208 02:01:55.012179 1055021 logs.go:282] 0 containers: []
	W1208 02:01:55.012188 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:55.012198 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:55.012211 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:55.089182 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:55.089225 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:55.107781 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:55.107811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:55.175880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:55.175942 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:55.175962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:55.205060 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:55.205095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:57.733634 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:57.744236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:57.744308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:57.769149 1055021 cri.go:89] found id: ""
	I1208 02:01:57.769173 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.769182 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:57.769188 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:57.769246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:57.796831 1055021 cri.go:89] found id: ""
	I1208 02:01:57.796860 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.796869 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:57.796876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:57.796932 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:57.821809 1055021 cri.go:89] found id: ""
	I1208 02:01:57.821834 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.821844 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:57.821850 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:57.821917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:57.849385 1055021 cri.go:89] found id: ""
	I1208 02:01:57.849410 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.849418 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:57.849424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:57.849481 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:57.874645 1055021 cri.go:89] found id: ""
	I1208 02:01:57.874669 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.874678 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:57.874684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:57.874742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:57.899500 1055021 cri.go:89] found id: ""
	I1208 02:01:57.899572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.899608 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:57.899623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:57.899695 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:57.926677 1055021 cri.go:89] found id: ""
	I1208 02:01:57.926711 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.926720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:57.926727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:57.926833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:57.952159 1055021 cri.go:89] found id: ""
	I1208 02:01:57.952233 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.952249 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:57.952259 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:57.952271 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:58.017945 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:58.018082 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:58.036702 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:58.036877 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:58.109217 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:58.109239 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:58.109252 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:58.137424 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:58.137460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:00.669211 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:00.679729 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:00.679803 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:00.704116 1055021 cri.go:89] found id: ""
	I1208 02:02:00.704140 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.704149 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:00.704156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:00.704220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:00.728883 1055021 cri.go:89] found id: ""
	I1208 02:02:00.728908 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.728917 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:00.728923 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:00.728984 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:00.757361 1055021 cri.go:89] found id: ""
	I1208 02:02:00.757437 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.757453 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:00.757461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:00.757523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:00.784303 1055021 cri.go:89] found id: ""
	I1208 02:02:00.784332 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.784342 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:00.784349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:00.784420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:00.814794 1055021 cri.go:89] found id: ""
	I1208 02:02:00.814818 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.814827 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:00.814833 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:00.814915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:00.840985 1055021 cri.go:89] found id: ""
	I1208 02:02:00.841052 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.841069 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:00.841077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:00.841140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:00.869242 1055021 cri.go:89] found id: ""
	I1208 02:02:00.869268 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.869277 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:00.869283 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:00.869348 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:00.895515 1055021 cri.go:89] found id: ""
	I1208 02:02:00.895540 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.895549 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:00.895557 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:00.895600 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:00.963574 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:00.963611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:00.981868 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:00.981900 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:01.074452 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:01.074541 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:01.074602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:01.107635 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:01.107672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:03.643395 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:03.654301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:03.654370 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:03.680571 1055021 cri.go:89] found id: ""
	I1208 02:02:03.680609 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.680619 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:03.680626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:03.680696 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:03.709419 1055021 cri.go:89] found id: ""
	I1208 02:02:03.709444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.709453 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:03.709459 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:03.709518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:03.736028 1055021 cri.go:89] found id: ""
	I1208 02:02:03.736064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.736073 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:03.736079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:03.736140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:03.760906 1055021 cri.go:89] found id: ""
	I1208 02:02:03.760983 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.761005 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:03.761019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:03.761095 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:03.789527 1055021 cri.go:89] found id: ""
	I1208 02:02:03.789563 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.789572 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:03.789578 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:03.789655 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:03.817176 1055021 cri.go:89] found id: ""
	I1208 02:02:03.817203 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.817211 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:03.817218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:03.817277 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:03.847025 1055021 cri.go:89] found id: ""
	I1208 02:02:03.847053 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.847063 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:03.847070 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:03.847161 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:03.872945 1055021 cri.go:89] found id: ""
	I1208 02:02:03.872972 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.872981 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:03.872990 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:03.873002 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:03.938890 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:03.938927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:03.956669 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:03.956699 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:04.047856 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:04.047931 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:04.047960 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:04.084291 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:04.084328 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:06.621579 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:06.632180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:06.632262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:06.658187 1055021 cri.go:89] found id: ""
	I1208 02:02:06.658214 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.658223 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:06.658230 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:06.658289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:06.683455 1055021 cri.go:89] found id: ""
	I1208 02:02:06.683479 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.683487 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:06.683494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:06.683555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:06.709121 1055021 cri.go:89] found id: ""
	I1208 02:02:06.709147 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.709156 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:06.709162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:06.709220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:06.735601 1055021 cri.go:89] found id: ""
	I1208 02:02:06.735639 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.735649 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:06.735655 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:06.735717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:06.761793 1055021 cri.go:89] found id: ""
	I1208 02:02:06.761817 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.761826 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:06.761832 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:06.761891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:06.787053 1055021 cri.go:89] found id: ""
	I1208 02:02:06.787075 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.787092 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:06.787099 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:06.787168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:06.815964 1055021 cri.go:89] found id: ""
	I1208 02:02:06.815990 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.815999 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:06.816006 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:06.816067 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:06.841508 1055021 cri.go:89] found id: ""
	I1208 02:02:06.841534 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.841543 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:06.841552 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:06.841564 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:06.906588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:06.906627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:06.925347 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:06.925380 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:07.004820 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:07.004851 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:07.004865 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:07.038308 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:07.038348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.573053 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:09.583792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:09.583864 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:09.611232 1055021 cri.go:89] found id: ""
	I1208 02:02:09.611255 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.611265 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:09.611271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:09.611340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:09.636029 1055021 cri.go:89] found id: ""
	I1208 02:02:09.636054 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.636063 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:09.636069 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:09.636127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:09.662307 1055021 cri.go:89] found id: ""
	I1208 02:02:09.662334 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.662344 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:09.662350 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:09.662430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:09.688279 1055021 cri.go:89] found id: ""
	I1208 02:02:09.688304 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.688314 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:09.688320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:09.688385 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:09.717056 1055021 cri.go:89] found id: ""
	I1208 02:02:09.717081 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.717090 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:09.717097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:09.717206 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:09.745719 1055021 cri.go:89] found id: ""
	I1208 02:02:09.745744 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.745753 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:09.745760 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:09.745820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:09.774995 1055021 cri.go:89] found id: ""
	I1208 02:02:09.775020 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.775029 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:09.775035 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:09.775107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:09.800142 1055021 cri.go:89] found id: ""
	I1208 02:02:09.800165 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.800174 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:09.800183 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:09.800196 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:09.817474 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:09.817504 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:09.881166 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:09.881188 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:09.881201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:09.909282 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:09.909316 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.936890 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:09.936917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:12.504767 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:12.517010 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:12.517087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:12.552375 1055021 cri.go:89] found id: ""
	I1208 02:02:12.552405 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.552414 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:12.552421 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:12.552484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:12.581970 1055021 cri.go:89] found id: ""
	I1208 02:02:12.581993 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.582002 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:12.582008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:12.582070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:12.609191 1055021 cri.go:89] found id: ""
	I1208 02:02:12.609215 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.609223 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:12.609229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:12.609289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:12.634872 1055021 cri.go:89] found id: ""
	I1208 02:02:12.634900 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.634909 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:12.634917 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:12.634977 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:12.660600 1055021 cri.go:89] found id: ""
	I1208 02:02:12.660622 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.660631 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:12.660637 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:12.660698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:12.686371 1055021 cri.go:89] found id: ""
	I1208 02:02:12.686394 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.686402 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:12.686409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:12.686468 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:12.711549 1055021 cri.go:89] found id: ""
	I1208 02:02:12.711574 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.711583 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:12.711589 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:12.711650 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:12.736572 1055021 cri.go:89] found id: ""
	I1208 02:02:12.736599 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.736609 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:12.736619 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:12.736631 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:12.754919 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:12.754947 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:12.825472 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:12.825494 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:12.825508 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:12.854189 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:12.854226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:12.881205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:12.881233 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:15.446588 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:15.457588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:15.457660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:15.482738 1055021 cri.go:89] found id: ""
	I1208 02:02:15.482763 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.482772 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:15.482779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:15.482877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:15.511332 1055021 cri.go:89] found id: ""
	I1208 02:02:15.511364 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.511373 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:15.511380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:15.511446 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:15.555502 1055021 cri.go:89] found id: ""
	I1208 02:02:15.555528 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.555537 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:15.555543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:15.555604 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:15.584568 1055021 cri.go:89] found id: ""
	I1208 02:02:15.584590 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.584598 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:15.584604 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:15.584662 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:15.613196 1055021 cri.go:89] found id: ""
	I1208 02:02:15.613219 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.613228 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:15.613234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:15.613299 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:15.642375 1055021 cri.go:89] found id: ""
	I1208 02:02:15.642396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.642404 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:15.642411 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:15.642469 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:15.666701 1055021 cri.go:89] found id: ""
	I1208 02:02:15.666724 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.666733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:15.666739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:15.666804 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:15.694203 1055021 cri.go:89] found id: ""
	I1208 02:02:15.694226 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.694235 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:15.694244 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:15.694256 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:15.711985 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:15.712018 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:15.783845 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:15.783867 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:15.783880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:15.812138 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:15.812172 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:15.841785 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:15.841815 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.407879 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:18.418616 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:18.418687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:18.452125 1055021 cri.go:89] found id: ""
	I1208 02:02:18.452149 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.452158 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:18.452165 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:18.452226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:18.484590 1055021 cri.go:89] found id: ""
	I1208 02:02:18.484618 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.484627 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:18.484633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:18.484693 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:18.521073 1055021 cri.go:89] found id: ""
	I1208 02:02:18.521101 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.521111 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:18.521117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:18.521195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:18.552106 1055021 cri.go:89] found id: ""
	I1208 02:02:18.552131 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.552142 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:18.552149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:18.552234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:18.583000 1055021 cri.go:89] found id: ""
	I1208 02:02:18.583026 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.583034 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:18.583041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:18.583108 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:18.608873 1055021 cri.go:89] found id: ""
	I1208 02:02:18.608901 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.608909 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:18.608916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:18.608975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:18.638459 1055021 cri.go:89] found id: ""
	I1208 02:02:18.638482 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.638491 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:18.638497 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:18.638554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:18.664652 1055021 cri.go:89] found id: ""
	I1208 02:02:18.664678 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.664687 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:18.664696 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:18.664708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:18.727887 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:18.727909 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:18.727922 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:18.756733 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:18.756768 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:18.784791 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:18.784819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.854704 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:18.854747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.373144 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:21.384002 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:21.384076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:21.408827 1055021 cri.go:89] found id: ""
	I1208 02:02:21.408851 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.408860 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:21.408866 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:21.408926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:21.437335 1055021 cri.go:89] found id: ""
	I1208 02:02:21.437366 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.437375 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:21.437380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:21.437440 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:21.461726 1055021 cri.go:89] found id: ""
	I1208 02:02:21.461753 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.461762 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:21.461768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:21.461827 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:21.486068 1055021 cri.go:89] found id: ""
	I1208 02:02:21.486095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.486104 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:21.486110 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:21.486168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:21.521646 1055021 cri.go:89] found id: ""
	I1208 02:02:21.521671 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.521679 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:21.521686 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:21.521754 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:21.549687 1055021 cri.go:89] found id: ""
	I1208 02:02:21.549714 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.549723 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:21.549730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:21.549789 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:21.584524 1055021 cri.go:89] found id: ""
	I1208 02:02:21.584600 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.584615 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:21.584623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:21.584686 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:21.613834 1055021 cri.go:89] found id: ""
	I1208 02:02:21.613859 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.613868 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:21.613877 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:21.613888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:21.679269 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:21.679305 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.696894 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:21.696924 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:21.763490 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:21.763525 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:21.763538 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:21.791788 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:21.791819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
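	The loop above is the harness polling for control-plane containers: each pass runs `crictl ps -a --quiet --name=<component>` for every expected component and logs a warning when nothing matches. A minimal sketch of the same check, assuming it is run inside the minikube node (e.g. after `minikube ssh`); the component list simply mirrors the names the log queries:

	```bash
	# Reproduce the harness's per-component container check (sketch, not minikube's code).
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")   # same flags the log shows
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$c\""
	  else
	    echo "$c: $ids"
	  fi
	done
	```

	Every query in this run returns an empty ID list, which is why the harness falls back to gathering kubelet, dmesg, CRI-O, and container-status logs instead.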
	I1208 02:02:24.320943 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:24.332441 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:24.332511 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:24.359381 1055021 cri.go:89] found id: ""
	I1208 02:02:24.359403 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.359412 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:24.359418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:24.359484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:24.385766 1055021 cri.go:89] found id: ""
	I1208 02:02:24.385789 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.385798 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:24.385804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:24.385870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:24.412597 1055021 cri.go:89] found id: ""
	I1208 02:02:24.412619 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.412633 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:24.412640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:24.412700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:24.438239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.438262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.438270 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:24.438277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:24.438336 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:24.465529 1055021 cri.go:89] found id: ""
	I1208 02:02:24.465551 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.465560 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:24.465566 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:24.465628 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:24.490130 1055021 cri.go:89] found id: ""
	I1208 02:02:24.490153 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.490162 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:24.490168 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:24.490228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:24.531239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.531262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.531271 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:24.531277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:24.531335 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:24.570624 1055021 cri.go:89] found id: ""
	I1208 02:02:24.570646 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.570654 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:24.570663 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:24.570676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:24.588822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:24.588852 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:24.650804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:24.650826 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:24.650858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:24.680022 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:24.680060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.708316 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:24.708352 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
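	The repeated `describe nodes` failures all reduce to one symptom: nothing is listening on localhost:8443, so every kubectl call fails with `connect: connection refused`. A hedged sketch of confirming that directly on the node; the `pgrep` probe is the same one the harness runs, while the `curl` health probe is only an illustrative assumption, not something the log shows the harness doing:

	```bash
	# Is a kube-apiserver process running at all? (same probe the harness uses)
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "kube-apiserver process is running"
	else
	  echo "kube-apiserver process not found"
	fi
	# Illustrative extra check (assumption): probe the API endpoint directly.
	curl -sk https://localhost:8443/healthz \
	  || echo "apiserver not reachable on localhost:8443"
	```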
	I1208 02:02:27.274217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:27.287664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:27.287788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:27.318113 1055021 cri.go:89] found id: ""
	I1208 02:02:27.318193 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.318215 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:27.318234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:27.318332 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:27.344915 1055021 cri.go:89] found id: ""
	I1208 02:02:27.344943 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.344951 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:27.344958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:27.345024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:27.374469 1055021 cri.go:89] found id: ""
	I1208 02:02:27.374502 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.374512 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:27.374519 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:27.374588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:27.399626 1055021 cri.go:89] found id: ""
	I1208 02:02:27.399665 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.399674 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:27.399680 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:27.399753 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:27.429184 1055021 cri.go:89] found id: ""
	I1208 02:02:27.429222 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.429230 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:27.429236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:27.429303 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:27.453872 1055021 cri.go:89] found id: ""
	I1208 02:02:27.453910 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.453919 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:27.453926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:27.453996 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:27.479093 1055021 cri.go:89] found id: ""
	I1208 02:02:27.479117 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.479127 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:27.479134 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:27.479195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:27.513793 1055021 cri.go:89] found id: ""
	I1208 02:02:27.513820 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.513840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:27.513849 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:27.513862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:27.543879 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:27.543958 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:27.585714 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:27.585783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.651465 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:27.651502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:27.669169 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:27.669201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:27.732840 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
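	For reference, the diagnostics the harness keeps cycling through can be collected by hand with the same commands that appear in the log; this is just those commands grouped in one place, run inside the node:

	```bash
	sudo journalctl -u kubelet -n 400        # kubelet logs
	sudo journalctl -u crio -n 400           # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a          # container status, docker fallback
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig                               # fails while the apiserver is down
	```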
	I1208 02:02:30.233103 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:30.244434 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:30.244504 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:30.286359 1055021 cri.go:89] found id: ""
	I1208 02:02:30.286381 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.286390 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:30.286396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:30.286455 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:30.317925 1055021 cri.go:89] found id: ""
	I1208 02:02:30.317947 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.317955 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:30.317960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:30.318020 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:30.352522 1055021 cri.go:89] found id: ""
	I1208 02:02:30.352543 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.352551 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:30.352557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:30.352619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:30.376895 1055021 cri.go:89] found id: ""
	I1208 02:02:30.376917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.376925 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:30.376932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:30.376989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:30.401457 1055021 cri.go:89] found id: ""
	I1208 02:02:30.401478 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.401487 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:30.401493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:30.401551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:30.428269 1055021 cri.go:89] found id: ""
	I1208 02:02:30.428291 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.428300 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:30.428306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:30.428366 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:30.452846 1055021 cri.go:89] found id: ""
	I1208 02:02:30.452869 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.452878 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:30.452884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:30.452946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:30.477617 1055021 cri.go:89] found id: ""
	I1208 02:02:30.477645 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.477655 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:30.477665 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:30.477676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:30.507758 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:30.507782 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:30.577724 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:30.577802 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:30.598108 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:30.598136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:30.663869 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.663892 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:30.663905 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.192012 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:33.202802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:33.202903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:33.229607 1055021 cri.go:89] found id: ""
	I1208 02:02:33.229629 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.229638 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:33.229645 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:33.229704 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:33.257802 1055021 cri.go:89] found id: ""
	I1208 02:02:33.257837 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.257847 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:33.257854 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:33.257913 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:33.289073 1055021 cri.go:89] found id: ""
	I1208 02:02:33.289095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.289103 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:33.289113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:33.289171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:33.317039 1055021 cri.go:89] found id: ""
	I1208 02:02:33.317060 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.317069 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:33.317075 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:33.317137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:33.342479 1055021 cri.go:89] found id: ""
	I1208 02:02:33.342500 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.342509 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:33.342515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:33.342577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:33.367849 1055021 cri.go:89] found id: ""
	I1208 02:02:33.367877 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.367886 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:33.367892 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:33.367950 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:33.393711 1055021 cri.go:89] found id: ""
	I1208 02:02:33.393739 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.393748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:33.393755 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:33.393818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:33.419264 1055021 cri.go:89] found id: ""
	I1208 02:02:33.419286 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.419295 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:33.419303 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:33.419320 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.446586 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:33.446620 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:33.474605 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:33.474633 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:33.546521 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:33.546562 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:33.567522 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:33.567553 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:33.633164 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.133387 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:36.145051 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:36.145130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:36.178396 1055021 cri.go:89] found id: ""
	I1208 02:02:36.178426 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.178434 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:36.178442 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:36.178500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:36.204662 1055021 cri.go:89] found id: ""
	I1208 02:02:36.204685 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.204694 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:36.204700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:36.204758 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:36.233744 1055021 cri.go:89] found id: ""
	I1208 02:02:36.233766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.233776 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:36.233782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:36.233844 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:36.271413 1055021 cri.go:89] found id: ""
	I1208 02:02:36.271436 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.271445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:36.271453 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:36.271518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:36.299867 1055021 cri.go:89] found id: ""
	I1208 02:02:36.299889 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.299898 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:36.299905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:36.299967 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:36.333748 1055021 cri.go:89] found id: ""
	I1208 02:02:36.333771 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.333779 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:36.333786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:36.333877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:36.359920 1055021 cri.go:89] found id: ""
	I1208 02:02:36.359944 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.359953 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:36.359959 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:36.360016 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:36.384561 1055021 cri.go:89] found id: ""
	I1208 02:02:36.384583 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.384592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:36.384600 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:36.384611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:36.449118 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:36.449153 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:36.469510 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:36.469537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:36.544911 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.544934 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:36.544972 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:36.577604 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:36.577640 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.106569 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:39.117314 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:39.117406 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:39.147330 1055021 cri.go:89] found id: ""
	I1208 02:02:39.147354 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.147362 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:39.147369 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:39.147429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:39.175702 1055021 cri.go:89] found id: ""
	I1208 02:02:39.175725 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.175733 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:39.175739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:39.175797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:39.209892 1055021 cri.go:89] found id: ""
	I1208 02:02:39.209917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.209926 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:39.209932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:39.209990 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:39.235210 1055021 cri.go:89] found id: ""
	I1208 02:02:39.235239 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.235248 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:39.235255 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:39.235312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:39.268421 1055021 cri.go:89] found id: ""
	I1208 02:02:39.268444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.268453 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:39.268460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:39.268520 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:39.308045 1055021 cri.go:89] found id: ""
	I1208 02:02:39.308070 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.308079 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:39.308086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:39.308152 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:39.338659 1055021 cri.go:89] found id: ""
	I1208 02:02:39.338684 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.338693 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:39.338699 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:39.338759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:39.369373 1055021 cri.go:89] found id: ""
	I1208 02:02:39.369396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.369405 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:39.369414 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:39.369426 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.401929 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:39.401959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:39.466665 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:39.466705 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:39.484758 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:39.484786 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:39.570718 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:39.570737 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:39.570750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.101949 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:42.135199 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:42.135361 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:42.190279 1055021 cri.go:89] found id: ""
	I1208 02:02:42.190367 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.190393 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:42.190415 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:42.190545 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:42.222777 1055021 cri.go:89] found id: ""
	I1208 02:02:42.222883 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.222911 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:42.222934 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:42.223043 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:42.257086 1055021 cri.go:89] found id: ""
	I1208 02:02:42.257169 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.257193 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:42.257217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:42.257340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:42.290338 1055021 cri.go:89] found id: ""
	I1208 02:02:42.290421 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.290445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:42.290464 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:42.290571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:42.321497 1055021 cri.go:89] found id: ""
	I1208 02:02:42.321567 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.321592 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:42.321612 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:42.321710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:42.351037 1055021 cri.go:89] found id: ""
	I1208 02:02:42.351157 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.351184 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:42.351205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:42.351308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:42.377225 1055021 cri.go:89] found id: ""
	I1208 02:02:42.377251 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.377259 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:42.377266 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:42.377324 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:42.403038 1055021 cri.go:89] found id: ""
	I1208 02:02:42.403064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.403073 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:42.403117 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:42.403130 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:42.468670 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:42.468709 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:42.486822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:42.486906 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:42.576804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:42.576828 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:42.576844 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.609307 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:42.609345 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:45.139048 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:45.153298 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:45.153393 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:45.190816 1055021 cri.go:89] found id: ""
	I1208 02:02:45.190864 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.190874 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:45.190882 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:45.190954 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:45.248053 1055021 cri.go:89] found id: ""
	I1208 02:02:45.248087 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.248097 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:45.248105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:45.248178 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:45.291403 1055021 cri.go:89] found id: ""
	I1208 02:02:45.291441 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.291506 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:45.291539 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:45.291685 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:45.327809 1055021 cri.go:89] found id: ""
	I1208 02:02:45.327885 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.327907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:45.327925 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:45.328011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:45.356269 1055021 cri.go:89] found id: ""
	I1208 02:02:45.356293 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.356302 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:45.356308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:45.356386 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:45.385189 1055021 cri.go:89] found id: ""
	I1208 02:02:45.385213 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.385222 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:45.385229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:45.385309 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:45.413524 1055021 cri.go:89] found id: ""
	I1208 02:02:45.413549 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.413558 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:45.413565 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:45.413652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:45.443469 1055021 cri.go:89] found id: ""
	I1208 02:02:45.443547 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.443563 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:45.443572 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:45.443584 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:45.515350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:45.515441 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:45.534931 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:45.534961 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:45.612239 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:45.612262 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:45.612274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:45.640465 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:45.640503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.170309 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:48.181762 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:48.181835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:48.209264 1055021 cri.go:89] found id: ""
	I1208 02:02:48.209288 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.209297 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:48.209303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:48.209364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:48.236743 1055021 cri.go:89] found id: ""
	I1208 02:02:48.236766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.236775 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:48.236782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:48.236847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:48.275731 1055021 cri.go:89] found id: ""
	I1208 02:02:48.275757 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.275765 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:48.275772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:48.275837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:48.311639 1055021 cri.go:89] found id: ""
	I1208 02:02:48.311667 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.311676 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:48.311682 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:48.311744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:48.342675 1055021 cri.go:89] found id: ""
	I1208 02:02:48.342711 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.342720 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:48.342726 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:48.342808 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:48.369485 1055021 cri.go:89] found id: ""
	I1208 02:02:48.369519 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.369528 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:48.369535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:48.369608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:48.396744 1055021 cri.go:89] found id: ""
	I1208 02:02:48.396769 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.396778 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:48.396785 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:48.396847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:48.422870 1055021 cri.go:89] found id: ""
	I1208 02:02:48.422894 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.422904 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:48.422913 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:48.422927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.454409 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:48.454482 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:48.522366 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:48.522456 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:48.541233 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:48.541391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:48.617160 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:48.617226 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:48.617247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:51.146382 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:51.160619 1055021 out.go:203] 
	W1208 02:02:51.163425 1055021 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 02:02:51.163473 1055021 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 02:02:51.163484 1055021 out.go:285] * Related issues:
	W1208 02:02:51.163498 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 02:02:51.163517 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 02:02:51.166282 1055021 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317270944Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317325255Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317374683Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317435303Z" level=info msg="RDT not available in the host system"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317500313Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318427518Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318519039Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318582121Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319471993Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319585217Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319774265Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.320528701Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321124572Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321312036Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371792319Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371951033Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372008469Z" level=info msg="Create NRI interface"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372105816Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372118829Z" level=info msg="runtime interface created"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372130251Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372136659Z" level=info msg="runtime interface starting up..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372142583Z" level=info msg="starting plugins..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372154743Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372216209Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:56:47 newest-cni-448023 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:03:00.826702   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:00.827155   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:00.828988   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:00.829376   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:00.831189   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:03:00 up  6:45,  0 user,  load average: 1.32, 0.80, 1.12
	Linux newest-cni-448023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:02:58 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13737]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13737]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13737]: E1208 02:02:59.071628   13737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13751]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13751]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:02:59 newest-cni-448023 kubelet[13751]: E1208 02:02:59.817503   13751 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:59 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:03:00 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 08 02:03:00 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:00 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:00 newest-cni-448023 kubelet[13775]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:00 newest-cni-448023 kubelet[13775]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:00 newest-cni-448023 kubelet[13775]: E1208 02:03:00.582097   13775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:03:00 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:03:00 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
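The kubelet journal above contains the proximate failure: every restart exits with "kubelet is configured to not run on a host using cgroup v1", so no static pod (including kube-apiserver) is ever created and the pause post-mortem finds nothing listening on 8443. As a quick, non-authoritative check of which cgroup mode the node is actually running (the filesystem type of /sys/fs/cgroup is "cgroup2fs" on a cgroup v2 host and "tmpfs" on a legacy v1 host), something like the following can be run against the node container; the container name is taken from this report, and the check itself is a generic sketch rather than part of the test suite:

	# sketch: confirm the cgroup hierarchy mode inside the minikube node container
	docker exec newest-cni-448023 stat -fc %T /sys/fs/cgroup    # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1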
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (359.613846ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-448023" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-448023
helpers_test.go:243: (dbg) docker inspect newest-cni-448023:

-- stdout --
	[
	    {
	        "Id": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	        "Created": "2025-12-08T01:46:34.353152924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1055155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:56:41.277432033Z",
	            "FinishedAt": "2025-12-08T01:56:39.892982826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/hosts",
	        "LogPath": "/var/lib/docker/containers/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9/ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9-json.log",
	        "Name": "/newest-cni-448023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-448023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-448023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ff1a1ad3010fdc49d0b4eeae8e8c3d92bd3662eb2f0a56a75d0dd31ce023eaf9",
	                "LowerDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b02e56c728dfb4a3b3ed61d68df181ba8774443b73cfd04055f7daf35fa5b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-448023",
	                "Source": "/var/lib/docker/volumes/newest-cni-448023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-448023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-448023",
	                "name.minikube.sigs.k8s.io": "newest-cni-448023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "813118b42480babba062786ba0ba8ff3e7452eec7c2d8f800688d8fd68359617",
	            "SandboxKey": "/var/run/docker/netns/813118b42480",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-448023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:9d:8d:8a:21:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec5af7f0fdbc70a95f83d97d8a04145286c7acd7e864f0f850cd22983b469ab7",
	                    "EndpointID": "577f657908aa7f309cdfc5d98526f00d0b1c5b25cb769be3035b9f923a1c6bf3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-448023",
	                        "ff1a1ad3010f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
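The inspect output above shows the node container itself is still running, with 8443/tcp published on 127.0.0.1:33820 even though nothing inside it is listening. As a sketch (not part of the test harness), the same host-port mapping can be read directly with a Go template instead of scanning the full JSON; the container name comes from this report:

	# sketch: extract the host port mapped to the apiserver port 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-448023
	# for the container captured above this prints 33820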
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (343.936476ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-448023 logs -n 25: (1.555247778s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p embed-certs-172173 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p embed-certs-172173                                                                                                                                                                                                                                │ embed-certs-172173           │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p disable-driver-mounts-503313                                                                                                                                                                                                                      │ disable-driver-mounts-503313 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993283 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:45 UTC │
	│ start   │ -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │ 08 Dec 25 01:46 UTC │
	│ image   │ default-k8s-diff-port-993283 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ pause   │ -p default-k8s-diff-port-993283 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ delete  │ -p default-k8s-diff-port-993283                                                                                                                                                                                                                      │ default-k8s-diff-port-993283 │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │ 08 Dec 25 01:46 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:46 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-389831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:49 UTC │                     │
	│ stop    │ -p no-preload-389831 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ addons  │ enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │ 08 Dec 25 01:50 UTC │
	│ start   │ -p no-preload-389831 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-389831            │ jenkins │ v1.37.0 │ 08 Dec 25 01:50 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-448023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:54 UTC │                     │
	│ stop    │ -p newest-cni-448023 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p newest-cni-448023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │ 08 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-448023 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 01:56 UTC │                     │
	│ image   │ newest-cni-448023 image list --format=json                                                                                                                                                                                                           │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	│ pause   │ -p newest-cni-448023 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	│ unpause │ -p newest-cni-448023 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-448023            │ jenkins │ v1.37.0 │ 08 Dec 25 02:02 UTC │ 08 Dec 25 02:02 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:56:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:56:40.995814 1055021 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:56:40.995993 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996024 1055021 out.go:374] Setting ErrFile to fd 2...
	I1208 01:56:40.996044 1055021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:56:40.996297 1055021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:56:40.996698 1055021 out.go:368] Setting JSON to false
	I1208 01:56:40.997651 1055021 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23933,"bootTime":1765135068,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 01:56:40.997760 1055021 start.go:143] virtualization:  
	I1208 01:56:41.000930 1055021 out.go:179] * [newest-cni-448023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:56:41.005767 1055021 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:56:41.005958 1055021 notify.go:221] Checking for updates...
	I1208 01:56:41.009547 1055021 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:56:41.012698 1055021 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:41.016029 1055021 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 01:56:41.019114 1055021 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:56:41.022081 1055021 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:56:41.025425 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:41.026092 1055021 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:56:41.062956 1055021 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:56:41.063137 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.133740 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.124579493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.133841 1055021 docker.go:319] overlay module found
	I1208 01:56:41.136922 1055021 out.go:179] * Using the docker driver based on existing profile
	I1208 01:56:41.139812 1055021 start.go:309] selected driver: docker
	I1208 01:56:41.139836 1055021 start.go:927] validating driver "docker" against &{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.139955 1055021 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:56:41.140671 1055021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:56:41.193763 1055021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:56:41.183682659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:56:41.194162 1055021 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:56:41.194196 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:41.194260 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:41.194313 1055021 start.go:353] cluster config:
	{Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:41.197698 1055021 out.go:179] * Starting "newest-cni-448023" primary control-plane node in "newest-cni-448023" cluster
	I1208 01:56:41.200489 1055021 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 01:56:41.203470 1055021 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:56:41.206341 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:41.206393 1055021 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 01:56:41.206406 1055021 cache.go:65] Caching tarball of preloaded images
	I1208 01:56:41.206414 1055021 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:56:41.206514 1055021 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 01:56:41.206524 1055021 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1208 01:56:41.206659 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.226393 1055021 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:56:41.226417 1055021 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:56:41.226437 1055021 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:56:41.226470 1055021 start.go:360] acquireMachinesLock for newest-cni-448023: {Name:mkd08549e99dd925020de89001c228970b1a4d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:56:41.226539 1055021 start.go:364] duration metric: took 45.818µs to acquireMachinesLock for "newest-cni-448023"
	I1208 01:56:41.226562 1055021 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:56:41.226569 1055021 fix.go:54] fixHost starting: 
	I1208 01:56:41.226872 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.244524 1055021 fix.go:112] recreateIfNeeded on newest-cni-448023: state=Stopped err=<nil>
	W1208 01:56:41.244564 1055021 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:56:42.018560 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:44.518581 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:41.247746 1055021 out.go:252] * Restarting existing docker container for "newest-cni-448023" ...
	I1208 01:56:41.247847 1055021 cli_runner.go:164] Run: docker start newest-cni-448023
	I1208 01:56:41.505835 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:41.523362 1055021 kic.go:430] container "newest-cni-448023" state is running.
	I1208 01:56:41.523773 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:41.545536 1055021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/config.json ...
	I1208 01:56:41.545777 1055021 machine.go:94] provisionDockerMachine start ...
	I1208 01:56:41.545848 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:41.570998 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:41.571328 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:41.571336 1055021 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:56:41.572041 1055021 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:56:44.722629 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.722658 1055021 ubuntu.go:182] provisioning hostname "newest-cni-448023"
	I1208 01:56:44.722733 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.743562 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.743889 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.743906 1055021 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-448023 && echo "newest-cni-448023" | sudo tee /etc/hostname
	I1208 01:56:44.912657 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-448023
	
	I1208 01:56:44.912755 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:44.930550 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:44.930902 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:44.930926 1055021 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-448023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-448023/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-448023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:56:45.125086 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
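	Note: the provisioning steps above reach the kic container over SSH on the host port Docker published for 22/tcp (33817 in this run), using the profile's id_rsa key and the "docker" user. A minimal sketch of reproducing that lookup and connection by hand, with the paths taken from this log (illustrative only, not part of the test run):

	    # Ask Docker which host port it mapped to the container's sshd (22/tcp).
	    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-448023)
	    # Connect the same way minikube's ssh_runner does: user "docker", the profile's machine key.
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa \
	        -p "$PORT" docker@127.0.0.1 hostname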
	I1208 01:56:45.125166 1055021 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 01:56:45.125215 1055021 ubuntu.go:190] setting up certificates
	I1208 01:56:45.125242 1055021 provision.go:84] configureAuth start
	I1208 01:56:45.125340 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:45.146934 1055021 provision.go:143] copyHostCerts
	I1208 01:56:45.147071 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 01:56:45.147086 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 01:56:45.147185 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 01:56:45.147315 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 01:56:45.147333 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 01:56:45.147379 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 01:56:45.147450 1055021 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 01:56:45.147463 1055021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 01:56:45.147494 1055021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 01:56:45.147561 1055021 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.newest-cni-448023 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-448023]
	I1208 01:56:45.319641 1055021 provision.go:177] copyRemoteCerts
	I1208 01:56:45.319718 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:56:45.319771 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.338151 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.446957 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:56:45.464534 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:56:45.481634 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:56:45.499110 1055021 provision.go:87] duration metric: took 373.83191ms to configureAuth
	I1208 01:56:45.499137 1055021 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:56:45.499354 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:45.499462 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.519312 1055021 main.go:143] libmachine: Using SSH client type: native
	I1208 01:56:45.520323 1055021 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1208 01:56:45.520348 1055021 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 01:56:45.838649 1055021 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 01:56:45.838675 1055021 machine.go:97] duration metric: took 4.292880237s to provisionDockerMachine
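	Note: the SSH command just above writes an environment drop-in that passes --insecure-registry for the service CIDR (10.96.0.0/12) to CRI-O and restarts the service. A quick sketch for confirming it landed, assuming a shell inside the node (e.g. via `minikube ssh -p newest-cni-448023`):

	    cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio       # should report "active" once the restart has completed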
	I1208 01:56:45.838688 1055021 start.go:293] postStartSetup for "newest-cni-448023" (driver="docker")
	I1208 01:56:45.838701 1055021 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:56:45.838764 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:56:45.838808 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:45.856107 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:45.962864 1055021 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:56:45.966280 1055021 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:56:45.966310 1055021 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:56:45.966321 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 01:56:45.966376 1055021 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 01:56:45.966455 1055021 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 01:56:45.966565 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:56:45.973812 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:45.990960 1055021 start.go:296] duration metric: took 152.256258ms for postStartSetup
	I1208 01:56:45.991062 1055021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:56:45.991102 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.010295 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.111994 1055021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:56:46.116921 1055021 fix.go:56] duration metric: took 4.890342951s for fixHost
	I1208 01:56:46.116949 1055021 start.go:83] releasing machines lock for "newest-cni-448023", held for 4.89039814s
	I1208 01:56:46.117023 1055021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-448023
	I1208 01:56:46.133998 1055021 ssh_runner.go:195] Run: cat /version.json
	I1208 01:56:46.134053 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.134086 1055021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:56:46.134143 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:46.155007 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.157578 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:46.259943 1055021 ssh_runner.go:195] Run: systemctl --version
	I1208 01:56:46.363782 1055021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 01:56:46.401418 1055021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:56:46.405895 1055021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:56:46.406027 1055021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:56:46.414120 1055021 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:56:46.414145 1055021 start.go:496] detecting cgroup driver to use...
	I1208 01:56:46.414178 1055021 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:56:46.414240 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 01:56:46.430116 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 01:56:46.443306 1055021 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:56:46.443370 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:56:46.459228 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:56:46.472250 1055021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:56:46.583643 1055021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:56:46.702836 1055021 docker.go:234] disabling docker service ...
	I1208 01:56:46.702974 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:56:46.718081 1055021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:56:46.731165 1055021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:56:46.841278 1055021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:56:46.959396 1055021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:56:46.972986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:56:46.988672 1055021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 01:56:46.988773 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:46.998541 1055021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 01:56:46.998635 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.012333 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.022719 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.033036 1055021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:56:47.042410 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.053356 1055021 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.066055 1055021 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 01:56:47.076106 1055021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:56:47.083610 1055021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:56:47.090937 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.204760 1055021 ssh_runner.go:195] Run: sudo systemctl restart crio
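	Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, set cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before restarting CRI-O. A sketch for checking the resulting values on the node (key names taken from the commands in this log):

	    # Show the keys minikube just rewrote in the CRI-O drop-in config.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # Or ask CRI-O for its merged view of the configuration, as the log itself does below.
	    sudo crio config | grep -E 'pause_image|cgroup_manager'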
	I1208 01:56:47.377268 1055021 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 01:56:47.377383 1055021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 01:56:47.381048 1055021 start.go:564] Will wait 60s for crictl version
	I1208 01:56:47.381161 1055021 ssh_runner.go:195] Run: which crictl
	I1208 01:56:47.384529 1055021 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:56:47.407415 1055021 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 01:56:47.407590 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.438310 1055021 ssh_runner.go:195] Run: crio --version
	I1208 01:56:47.480028 1055021 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1208 01:56:47.482931 1055021 cli_runner.go:164] Run: docker network inspect newest-cni-448023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:56:47.498300 1055021 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:56:47.502114 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
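	Note: the bash one-liner above is minikube's idempotent /etc/hosts patch: drop any stale host.minikube.internal entry, append the current mapping, and copy the result back into place. The same commands, spelled out with comments for readability:

	    {
	      # keep every existing line except a stale host.minikube.internal entry
	      grep -v $'\thost.minikube.internal$' /etc/hosts
	      # append the gateway of this cluster's Docker network (192.168.85.1 here)
	      echo "192.168.85.1	host.minikube.internal"
	    } > /tmp/h.$$
	    # write the patched file back into place
	    sudo cp /tmp/h.$$ /etc/hosts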
	I1208 01:56:47.515024 1055021 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:56:47.517850 1055021 kubeadm.go:884] updating cluster {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:56:47.518007 1055021 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 01:56:47.518083 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.554783 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.554810 1055021 crio.go:433] Images already preloaded, skipping extraction
	I1208 01:56:47.554891 1055021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:56:47.580370 1055021 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 01:56:47.580396 1055021 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:56:47.580404 1055021 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1208 01:56:47.580497 1055021 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-448023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
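	Note: the [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in that minikube copies onto the node a few lines below (10-kubeadm.conf, 374 bytes). A sketch for inspecting what the node actually runs, assuming a shell inside the container:

	    # Show the kubelet unit together with every drop-in systemd has loaded for it.
	    sudo systemctl cat kubelet
	    # The minikube-generated drop-in itself, at the path scp'd later in this log.
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf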
	I1208 01:56:47.580581 1055021 ssh_runner.go:195] Run: crio config
	I1208 01:56:47.630652 1055021 cni.go:84] Creating CNI manager for ""
	I1208 01:56:47.630677 1055021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 01:56:47.630697 1055021 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:56:47.630720 1055021 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-448023 NodeName:newest-cni-448023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:56:47.630943 1055021 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-448023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:56:47.631027 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:56:47.638867 1055021 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:56:47.638960 1055021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:56:47.646535 1055021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1208 01:56:47.659466 1055021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:56:47.672488 1055021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
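	Note: the 2219-byte file just written is the generated kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document). To sanity-check such a file by hand, one option is sketched below, assuming a kubeadm recent enough to ship `kubeadm config validate` (v1.26+); the diff is the same comparison minikube itself performs later in this log:

	    # Validate the generated config bundle with the cluster's own kubeadm binary.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new
	    # Or compare it against the config the running cluster was built from.
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new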
	I1208 01:56:47.685612 1055021 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:56:47.689373 1055021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:56:47.699289 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:47.852921 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:47.877101 1055021 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023 for IP: 192.168.85.2
	I1208 01:56:47.877130 1055021 certs.go:195] generating shared ca certs ...
	I1208 01:56:47.877147 1055021 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:47.877305 1055021 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 01:56:47.877358 1055021 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 01:56:47.877370 1055021 certs.go:257] generating profile certs ...
	I1208 01:56:47.877482 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/client.key
	I1208 01:56:47.877551 1055021 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key.4685cb7e
	I1208 01:56:47.877603 1055021 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key
	I1208 01:56:47.877731 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 01:56:47.877771 1055021 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 01:56:47.877792 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 01:56:47.877831 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:56:47.877859 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:56:47.877890 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 01:56:47.877943 1055021 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 01:56:47.879217 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:56:47.903514 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:56:47.922072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:56:47.939555 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:56:47.956891 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:56:47.976072 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:56:47.994485 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:56:48.016256 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/newest-cni-448023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:56:48.036003 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 01:56:48.058425 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 01:56:48.078107 1055021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:56:48.096426 1055021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:56:48.110183 1055021 ssh_runner.go:195] Run: openssl version
	I1208 01:56:48.117292 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.125194 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 01:56:48.133030 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136789 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.136880 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 01:56:48.178238 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:56:48.186394 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.194429 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 01:56:48.203481 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207582 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.207655 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 01:56:48.249053 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:56:48.257115 1055021 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.265010 1055021 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:56:48.272913 1055021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276751 1055021 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.276818 1055021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:56:48.318199 1055021 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
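	Note: the openssl/ln sequence above installs each CA into the OpenSSL trust directory: the symlink name is the certificate's subject hash (as printed by `openssl x509 -hash -noout`) with a ".0" suffix, which is how b5213941.0 ends up pointing at minikubeCA.pem. The same dance for a single certificate, as a sketch:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # trust-store entry OpenSSL finds by hash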
	I1208 01:56:48.326277 1055021 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:56:48.330322 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:56:48.371576 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:56:48.412414 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:56:48.454546 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:56:48.499800 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:56:48.544265 1055021 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
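	Note: each `openssl x509 -noout -checkend 86400` call above exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certificates need regenerating. A standalone example using one of the files checked above:

	    # Exit status 0: still valid in 24h; non-zero: expiring (or expired) within 24h.
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least another day" \
	      || echo "expiring within 24 hours"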
	I1208 01:56:48.590374 1055021 kubeadm.go:401] StartCluster: {Name:newest-cni-448023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-448023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:56:48.590473 1055021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 01:56:48.590547 1055021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:56:48.619202 1055021 cri.go:89] found id: ""
	I1208 01:56:48.619330 1055021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:56:48.627096 1055021 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:56:48.627120 1055021 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:56:48.627172 1055021 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:56:48.634458 1055021 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:56:48.635058 1055021 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-448023" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.635319 1055021 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-789938/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-448023" cluster setting kubeconfig missing "newest-cni-448023" context setting]
	I1208 01:56:48.635800 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.637157 1055021 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:56:48.644838 1055021 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:56:48.644913 1055021 kubeadm.go:602] duration metric: took 17.785882ms to restartPrimaryControlPlane
	I1208 01:56:48.644930 1055021 kubeadm.go:403] duration metric: took 54.567759ms to StartCluster
	I1208 01:56:48.644947 1055021 settings.go:142] acquiring lock: {Name:mk9ef9dba48eb0e947a74bd01f060992ebc5cd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.645007 1055021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 01:56:48.645870 1055021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/kubeconfig: {Name:mk07c108199016ce18e32ba4f666dffcc83df60d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:56:48.646084 1055021 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 01:56:48.646389 1055021 config.go:182] Loaded profile config "newest-cni-448023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 01:56:48.646439 1055021 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:56:48.646504 1055021 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-448023"
	I1208 01:56:48.646529 1055021 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-448023"
	I1208 01:56:48.646555 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647285 1055021 addons.go:70] Setting dashboard=true in profile "newest-cni-448023"
	I1208 01:56:48.647305 1055021 addons.go:239] Setting addon dashboard=true in "newest-cni-448023"
	W1208 01:56:48.647311 1055021 addons.go:248] addon dashboard should already be in state true
	I1208 01:56:48.647331 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.647734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.647957 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.648448 1055021 addons.go:70] Setting default-storageclass=true in profile "newest-cni-448023"
	I1208 01:56:48.648476 1055021 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-448023"
	I1208 01:56:48.648734 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.651945 1055021 out.go:179] * Verifying Kubernetes components...
	I1208 01:56:48.654867 1055021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:56:48.684864 1055021 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:56:48.691009 1055021 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:56:48.694226 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:56:48.694251 1055021 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:56:48.694323 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.695436 1055021 addons.go:239] Setting addon default-storageclass=true in "newest-cni-448023"
	I1208 01:56:48.695482 1055021 host.go:66] Checking if "newest-cni-448023" exists ...
	I1208 01:56:48.695884 1055021 cli_runner.go:164] Run: docker container inspect newest-cni-448023 --format={{.State.Status}}
	I1208 01:56:48.701699 1055021 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1208 01:56:47.019431 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:49.518464 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:48.704558 1055021 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.704591 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:56:48.704655 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.736846 1055021 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.736869 1055021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:56:48.736936 1055021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-448023
	I1208 01:56:48.742543 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.766983 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.785430 1055021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/newest-cni-448023/id_rsa Username:docker}
	I1208 01:56:48.885046 1055021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:56:48.955470 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:56:48.955498 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:56:48.963459 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:48.965887 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:48.978338 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:56:48.978366 1055021 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:56:49.016188 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:56:49.016210 1055021 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:56:49.061303 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:56:49.061328 1055021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:56:49.074921 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:56:49.074987 1055021 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:56:49.087412 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:56:49.087487 1055021 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:56:49.099641 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:56:49.099667 1055021 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:56:49.112487 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:56:49.112550 1055021 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:56:49.125264 1055021 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.125288 1055021 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:56:49.138335 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:49.508759 1055021 api_server.go:52] waiting for apiserver process to appear ...
	W1208 01:56:49.508918 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509385 1055021 retry.go:31] will retry after 199.05184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509006 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509406 1055021 retry.go:31] will retry after 322.784094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.509263 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509418 1055021 retry.go:31] will retry after 353.691521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.509538 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:49.709327 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:49.771304 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.771383 1055021 retry.go:31] will retry after 463.845922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.832454 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:56:49.863948 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:49.893225 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.893260 1055021 retry.go:31] will retry after 412.627767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:49.933504 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:49.933538 1055021 retry.go:31] will retry after 461.252989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.009945 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.235907 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:50.306466 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:50.322038 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.322071 1055021 retry.go:31] will retry after 523.830022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:50.380008 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.380051 1055021 retry.go:31] will retry after 753.154513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.395255 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:50.456642 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.456676 1055021 retry.go:31] will retry after 803.433098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.509737 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.846838 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:50.908365 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:50.908408 1055021 retry.go:31] will retry after 671.521026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.519391 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:54.018689 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
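	The two lines from process 1047159 above are the parallel no-preload test hitting the same symptom against its node address (192.168.76.2:8443), interleaved with the newest-cni output from process 1055021. A minimal probe like the following (an assumption-laden sketch, meant to be run on or against the affected node, not part of the test suite) is enough to distinguish "nothing listening on the apiserver port yet" from any other failure the applies could report:

	```go
	// Tiny reachability probe for the apiserver port that keeps refusing
	// connections in the log above. Hypothetical helper, not minikube code;
	// the address is the localhost:8443 endpoint the kubectl calls use.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the "dial tcp [::1]:8443: connect: connection refused" errors.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
	```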
	I1208 01:56:51.009996 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.134042 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.192423 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.192455 1055021 retry.go:31] will retry after 689.227768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.260665 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:51.319134 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.319182 1055021 retry.go:31] will retry after 541.526321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.509442 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:51.580384 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:51.640452 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.640485 1055021 retry.go:31] will retry after 844.977075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.861863 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:56:51.882351 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:51.944280 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.944321 1055021 retry.go:31] will retry after 1.000499188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:51.967122 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:51.967155 1055021 retry.go:31] will retry after 859.890122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.010305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:52.486447 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:56:52.510056 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:56:52.585753 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.585816 1055021 retry.go:31] will retry after 1.004705222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.828167 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:52.886091 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.886122 1055021 retry.go:31] will retry after 2.82316744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:52.945292 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:53.006627 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.006710 1055021 retry.go:31] will retry after 2.04955933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.009824 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.510073 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.591501 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:53.650678 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:53.650712 1055021 retry.go:31] will retry after 3.502569911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:54.010159 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:54.509667 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.009590 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.057336 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:55.132269 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.132307 1055021 retry.go:31] will retry after 2.513983979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.509439 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:55.710171 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:55.769058 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:55.769091 1055021 retry.go:31] will retry after 2.669645777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:56:56.518414 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	W1208 01:56:58.518521 1047159 node_ready.go:55] error getting node "no-preload-389831" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-389831": dial tcp 192.168.76.2:8443: connect: connection refused
	I1208 01:56:59.018412 1047159 node_ready.go:38] duration metric: took 6m0.000405007s for node "no-preload-389831" to be "Ready" ...
	I1208 01:56:59.026905 1047159 out.go:203] 
	W1208 01:56:59.029838 1047159 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:56:59.029857 1047159 out.go:285] * 
	W1208 01:56:59.032175 1047159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:56:59.035425 1047159 out.go:203] 
	I1208 01:56:56.009694 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.509523 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.010140 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.153585 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:56:57.218181 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.218214 1055021 retry.go:31] will retry after 3.909169329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:57.647096 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:56:57.710136 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:57.710169 1055021 retry.go:31] will retry after 4.894098122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.009665 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:58.439443 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:56:58.505497 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.505529 1055021 retry.go:31] will retry after 6.007342944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:56:58.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.009469 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.510388 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.015300 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:00.509494 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.010257 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:01.128215 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:01.190419 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.190453 1055021 retry.go:31] will retry after 9.504933562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:01.509623 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.009676 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.509462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.605116 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:02.675800 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:02.675835 1055021 retry.go:31] will retry after 6.984717516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:03.009407 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:03.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.015233 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.509531 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:04.514060 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:04.574188 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:04.574220 1055021 retry.go:31] will retry after 6.522846226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:05.012398 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.509759 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.010229 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:06.509419 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.009462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:07.510275 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.010363 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.010036 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.509454 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:09.661163 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:09.722054 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:09.722085 1055021 retry.go:31] will retry after 5.465119302s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.010374 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.510222 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:10.696134 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:10.771084 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:10.771123 1055021 retry.go:31] will retry after 11.695285792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.009829 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.098157 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:11.159270 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.159302 1055021 retry.go:31] will retry after 8.417822009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:11.509651 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.010126 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:12.510304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.009464 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:13.510317 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.009529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.510393 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.009573 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:15.188355 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:57:15.251108 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.251147 1055021 retry.go:31] will retry after 12.201311078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:15.509570 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.009635 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:16.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.009802 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.510253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:18.509509 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.009459 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.509684 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:19.577986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:19.638356 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:19.638389 1055021 retry.go:31] will retry after 8.001395588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:20.012301 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.509725 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.010367 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:21.509456 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.009599 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:22.467388 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:57:22.509783 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:22.532031 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:22.532062 1055021 retry.go:31] will retry after 11.135828112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:23.009468 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.509446 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.009554 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:24.509432 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.010095 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:25.510255 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.012400 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.010403 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:27.452716 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:27.510223 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:27.519149 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.519184 1055021 retry.go:31] will retry after 13.452567778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.640862 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:27.703487 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:27.703522 1055021 retry.go:31] will retry after 26.167048463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
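
The "sudo pgrep -xnf kube-apiserver.*minikube.*" probes interleaved with the failures are minikube polling for the API server process to come back. The same check can be run by hand; the HTTPS health probe is an addition of mine (standard /healthz endpoint assumed), not something the log performs:

    # Is a kube-apiserver process running for this profile? (same probe the log runs)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Does anything answer on the apiserver port? (-k skips cert verification for a local check)
    curl -sk https://localhost:8443/healthz; echo
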
	I1208 01:57:28.009930 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:28.509594 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.009708 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.510396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.009745 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:30.509396 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.010280 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:31.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.010087 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.509477 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.010351 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.509804 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:33.668898 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:33.729185 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:33.729219 1055021 retry.go:31] will retry after 25.894597219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:34.009473 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:34.509532 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.010355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.509445 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.010451 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:36.509505 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.009541 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:37.509700 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.014196 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.509592 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.010217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:39.510250 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.015373 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.510349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:40.972256 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:57:41.009839 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:57:41.066333 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.066366 1055021 retry.go:31] will retry after 34.953666856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:41.509748 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.009596 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:42.509438 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.009956 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:43.510378 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.009680 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.509463 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.012784 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:45.510247 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.010335 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.509529 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.009480 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:47.509657 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.009556 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:48.509689 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.009367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.009459 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.046711 1055021 cri.go:89] found id: ""
	I1208 01:57:49.046741 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.046749 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.046756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.046829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.086414 1055021 cri.go:89] found id: ""
	I1208 01:57:49.086435 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.086443 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.086449 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.086517 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:49.111234 1055021 cri.go:89] found id: ""
	I1208 01:57:49.111256 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.111264 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:49.111270 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:49.111328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:49.135868 1055021 cri.go:89] found id: ""
	I1208 01:57:49.135890 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.135899 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:49.135905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:49.135966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:49.161459 1055021 cri.go:89] found id: ""
	I1208 01:57:49.161482 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.161490 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:49.161496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:49.161557 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:49.186397 1055021 cri.go:89] found id: ""
	I1208 01:57:49.186421 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.186430 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:49.186436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:49.186542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:49.213171 1055021 cri.go:89] found id: ""
	I1208 01:57:49.213192 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.213201 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:49.213207 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:49.213265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:49.239381 1055021 cri.go:89] found id: ""
	I1208 01:57:49.239451 1055021 logs.go:282] 0 containers: []
	W1208 01:57:49.239484 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
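
With the API server still unreachable, minikube asks the container runtime directly whether any control-plane containers exist; every "crictl ps -a --quiet --name=<component>" query above returns an empty ID list. The same sweep can be reproduced in one loop (component names taken from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
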
	I1208 01:57:49.239500 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:49.239512 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:49.311423 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:49.311459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:49.331846 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:49.331876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:49.396868 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:49.388947    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.389582    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391170    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.391639    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:49.393115    1905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:49.396933 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:49.396954 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:49.425376 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:49.425412 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
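
Having found no control-plane containers, minikube collects diagnostics: the kubelet and CRI-O journals, recent kernel warnings, a node description (which also fails while the apiserver is down), and the overall container status. The equivalent manual collection, using the same commands the log runs:

    sudo journalctl -u kubelet -n 400        # kubelet logs
    sudo journalctl -u crio -n 400           # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # fails until the apiserver answers
    sudo crictl ps -a || sudo docker ps -a   # container status
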
	I1208 01:57:51.956807 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:51.967366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:51.967435 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:51.995332 1055021 cri.go:89] found id: ""
	I1208 01:57:51.995356 1055021 logs.go:282] 0 containers: []
	W1208 01:57:51.995364 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:51.995371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:51.995429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.032087 1055021 cri.go:89] found id: ""
	I1208 01:57:52.032112 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.032121 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.032128 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.032190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.066375 1055021 cri.go:89] found id: ""
	I1208 01:57:52.066403 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.066412 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.066420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.066490 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:52.098263 1055021 cri.go:89] found id: ""
	I1208 01:57:52.098291 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.098300 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:52.098306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:52.098376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:52.125642 1055021 cri.go:89] found id: ""
	I1208 01:57:52.125672 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.125681 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:52.125688 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:52.125750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:52.155324 1055021 cri.go:89] found id: ""
	I1208 01:57:52.155348 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.155356 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:52.155363 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:52.155424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:52.180558 1055021 cri.go:89] found id: ""
	I1208 01:57:52.180625 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.180647 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:52.180659 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:52.180742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:52.209892 1055021 cri.go:89] found id: ""
	I1208 01:57:52.209921 1055021 logs.go:282] 0 containers: []
	W1208 01:57:52.209930 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:52.209940 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:52.209951 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:52.237887 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:52.237925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:52.279083 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:52.279113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:52.360508 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:52.360547 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:52.379387 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:52.379417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:52.443498 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:52.435353    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.435979    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.437708    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.438238    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:52.439701    2029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.871074 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:57:53.931966 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:53.931998 1055021 retry.go:31] will retry after 33.054913046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:54.943790 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:54.955406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:54.955477 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:54.980272 1055021 cri.go:89] found id: ""
	I1208 01:57:54.980295 1055021 logs.go:282] 0 containers: []
	W1208 01:57:54.980303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:54.980310 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:54.980377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.016873 1055021 cri.go:89] found id: ""
	I1208 01:57:55.016950 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.016973 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.016992 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.017116 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.055884 1055021 cri.go:89] found id: ""
	I1208 01:57:55.055905 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.055914 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.055920 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.055979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.085540 1055021 cri.go:89] found id: ""
	I1208 01:57:55.085561 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.085569 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.085576 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.085641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.111356 1055021 cri.go:89] found id: ""
	I1208 01:57:55.111378 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.111386 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.111393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.111473 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:55.137620 1055021 cri.go:89] found id: ""
	I1208 01:57:55.137643 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.137651 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:55.137657 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:55.137717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:55.162561 1055021 cri.go:89] found id: ""
	I1208 01:57:55.162626 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.162650 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:55.162667 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:55.162751 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:55.188593 1055021 cri.go:89] found id: ""
	I1208 01:57:55.188658 1055021 logs.go:282] 0 containers: []
	W1208 01:57:55.188683 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:55.188697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:55.188744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:55.254035 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:55.245609    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.246569    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248104    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.248377    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:55.249795    2129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:55.254057 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:55.254081 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:55.286453 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:55.286528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:55.320738 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:55.320762 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.387748 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:55.387783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:57.905905 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:57.918662 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:57.918736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:57.946026 1055021 cri.go:89] found id: ""
	I1208 01:57:57.946049 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.946058 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:57.946065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:57:57.946124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:57.971642 1055021 cri.go:89] found id: ""
	I1208 01:57:57.971669 1055021 logs.go:282] 0 containers: []
	W1208 01:57:57.971678 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:57:57.971685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:57:57.971744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.007407 1055021 cri.go:89] found id: ""
	I1208 01:57:58.007432 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.007441 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.007447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.007523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.050421 1055021 cri.go:89] found id: ""
	I1208 01:57:58.050442 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.050450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.050457 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.050518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.083694 1055021 cri.go:89] found id: ""
	I1208 01:57:58.083719 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.083728 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.083741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.083800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.110828 1055021 cri.go:89] found id: ""
	I1208 01:57:58.110874 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.110882 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.110899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.110974 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.136277 1055021 cri.go:89] found id: ""
	I1208 01:57:58.136302 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.136310 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.136317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.136378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:58.162168 1055021 cri.go:89] found id: ""
	I1208 01:57:58.162234 1055021 logs.go:282] 0 containers: []
	W1208 01:57:58.162258 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:58.162280 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:57:58.162304 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.191089 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:58.191121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:58.262015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:58.262058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:58.282086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:58.282121 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:58.355880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:58.347159    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.347597    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349304    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.349653    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:58.351623    2258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:58.355910 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:57:58.355926 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:57:59.624913 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:57:59.684883 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:57:59.684920 1055021 retry.go:31] will retry after 39.668120724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:00.884752 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:00.909814 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:00.909896 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:00.936313 1055021 cri.go:89] found id: ""
	I1208 01:58:00.936344 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.936353 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:00.936360 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:00.936420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:00.966288 1055021 cri.go:89] found id: ""
	I1208 01:58:00.966355 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.966376 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:00.966394 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:00.966483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:00.992494 1055021 cri.go:89] found id: ""
	I1208 01:58:00.992526 1055021 logs.go:282] 0 containers: []
	W1208 01:58:00.992536 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:00.992543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:00.992608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.026941 1055021 cri.go:89] found id: ""
	I1208 01:58:01.026969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.026979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.026985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.027057 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.058196 1055021 cri.go:89] found id: ""
	I1208 01:58:01.058224 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.058233 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.058239 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.058301 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.086997 1055021 cri.go:89] found id: ""
	I1208 01:58:01.087025 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.087034 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.087042 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.087124 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.113372 1055021 cri.go:89] found id: ""
	I1208 01:58:01.113401 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.113411 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.113417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.113480 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.140687 1055021 cri.go:89] found id: ""
	I1208 01:58:01.140717 1055021 logs.go:282] 0 containers: []
	W1208 01:58:01.140726 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.140736 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.140747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:01.211011 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:01.211061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:01.229916 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:01.229948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:01.319423 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:01.311026    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.311501    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313059    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.313402    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:01.314877    2370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:01.319443 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:01.319455 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:01.349176 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:01.349213 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:03.883281 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:03.894087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:03.894159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:03.919271 1055021 cri.go:89] found id: ""
	I1208 01:58:03.919294 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.919302 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:03.919309 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:03.919367 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:03.944356 1055021 cri.go:89] found id: ""
	I1208 01:58:03.944379 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.944387 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:03.944393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:03.944456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:03.969863 1055021 cri.go:89] found id: ""
	I1208 01:58:03.969890 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.969900 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:03.969907 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:03.969981 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:03.995306 1055021 cri.go:89] found id: ""
	I1208 01:58:03.995328 1055021 logs.go:282] 0 containers: []
	W1208 01:58:03.995336 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:03.995344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:03.995402 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.037050 1055021 cri.go:89] found id: ""
	I1208 01:58:04.037079 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.037089 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.037096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.037159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.081029 1055021 cri.go:89] found id: ""
	I1208 01:58:04.081057 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.081066 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.081073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.081139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.111984 1055021 cri.go:89] found id: ""
	I1208 01:58:04.112005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.112013 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.112020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.112079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.140750 1055021 cri.go:89] found id: ""
	I1208 01:58:04.140776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:04.140784 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.140793 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.140805 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.207146 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.207183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:04.225030 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:04.225061 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:04.295674 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:04.287171    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.288112    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.289897    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.290195    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:04.291767    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:04.295696 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:04.295708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:04.326962 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.327003 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:06.859119 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:06.871159 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:06.871236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:06.901570 1055021 cri.go:89] found id: ""
	I1208 01:58:06.901594 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.901603 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:06.901618 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:06.901681 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:06.930193 1055021 cri.go:89] found id: ""
	I1208 01:58:06.930220 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.930229 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:06.930235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:06.930298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:06.955159 1055021 cri.go:89] found id: ""
	I1208 01:58:06.955188 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.955197 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:06.955205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:06.955278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:06.980007 1055021 cri.go:89] found id: ""
	I1208 01:58:06.980031 1055021 logs.go:282] 0 containers: []
	W1208 01:58:06.980040 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:06.980046 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:06.980103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.017391 1055021 cri.go:89] found id: ""
	I1208 01:58:07.017417 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.017425 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.017432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.017495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.048550 1055021 cri.go:89] found id: ""
	I1208 01:58:07.048577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.048586 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.048596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.048659 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.080691 1055021 cri.go:89] found id: ""
	I1208 01:58:07.080759 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.080783 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.080796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.080874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.105849 1055021 cri.go:89] found id: ""
	I1208 01:58:07.105925 1055021 logs.go:282] 0 containers: []
	W1208 01:58:07.105948 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.105971 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:07.106012 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:07.138653 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.138732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.206905 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.206940 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.224653 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.224683 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.303888 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.295690    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.296494    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298048    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.298339    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.300007    2609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.303912 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:07.303925 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:09.834549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:09.845152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:09.845227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:09.870225 1055021 cri.go:89] found id: ""
	I1208 01:58:09.870251 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.870259 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:09.870268 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:09.870330 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:09.896168 1055021 cri.go:89] found id: ""
	I1208 01:58:09.896191 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.896200 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:09.896206 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:09.896269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:09.922117 1055021 cri.go:89] found id: ""
	I1208 01:58:09.922140 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.922149 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:09.922155 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:09.922215 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:09.947105 1055021 cri.go:89] found id: ""
	I1208 01:58:09.947129 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.947137 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:09.947143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:09.947236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:09.972509 1055021 cri.go:89] found id: ""
	I1208 01:58:09.972535 1055021 logs.go:282] 0 containers: []
	W1208 01:58:09.972544 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:09.972551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:09.972609 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.009065 1055021 cri.go:89] found id: ""
	I1208 01:58:10.009097 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.009107 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.009115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.009196 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.052170 1055021 cri.go:89] found id: ""
	I1208 01:58:10.052197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.052206 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.052212 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.052278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.078447 1055021 cri.go:89] found id: ""
	I1208 01:58:10.078472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:10.078480 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.078489 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:10.078500 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:10.109259 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.109300 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:10.138226 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.138251 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.204388 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.204424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.222357 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.222398 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.305027 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.289684    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.290128    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299134    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.299510    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.300947    2726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:12.805305 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:12.815949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:12.816024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:12.840507 1055021 cri.go:89] found id: ""
	I1208 01:58:12.840531 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.840540 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:12.840546 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:12.840614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:12.865555 1055021 cri.go:89] found id: ""
	I1208 01:58:12.865580 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.865589 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:12.865595 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:12.865653 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:12.890286 1055021 cri.go:89] found id: ""
	I1208 01:58:12.890311 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.890319 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:12.890325 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:12.890383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:12.915193 1055021 cri.go:89] found id: ""
	I1208 01:58:12.915217 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.915226 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:12.915233 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:12.915291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:12.940889 1055021 cri.go:89] found id: ""
	I1208 01:58:12.940915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.940923 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:12.940931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:12.941011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:12.967233 1055021 cri.go:89] found id: ""
	I1208 01:58:12.967259 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.967268 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:12.967275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:12.967337 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:12.990975 1055021 cri.go:89] found id: ""
	I1208 01:58:12.991001 1055021 logs.go:282] 0 containers: []
	W1208 01:58:12.991009 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:12.991016 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:12.991088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.025590 1055021 cri.go:89] found id: ""
	I1208 01:58:13.025616 1055021 logs.go:282] 0 containers: []
	W1208 01:58:13.025625 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.025634 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.025646 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.063362 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.063391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.134922 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.134959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.153025 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.153060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.215226 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.206650    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.207429    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209190    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.209686    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.211334    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:13.215246 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:13.215258 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:15.744740 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:15.755312 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:15.755383 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:15.780891 1055021 cri.go:89] found id: ""
	I1208 01:58:15.780915 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.780923 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:15.780930 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:15.780989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:15.806161 1055021 cri.go:89] found id: ""
	I1208 01:58:15.806185 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.806194 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:15.806200 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:15.806257 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:15.831178 1055021 cri.go:89] found id: ""
	I1208 01:58:15.831197 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.831205 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:15.831211 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:15.831269 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:15.856130 1055021 cri.go:89] found id: ""
	I1208 01:58:15.856155 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.856164 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:15.856171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:15.856232 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:15.885064 1055021 cri.go:89] found id: ""
	I1208 01:58:15.885136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.885159 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:15.885177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:15.885270 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:15.912595 1055021 cri.go:89] found id: ""
	I1208 01:58:15.912623 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.912631 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:15.912638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:15.912700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:15.936650 1055021 cri.go:89] found id: ""
	I1208 01:58:15.936677 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.936686 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:15.936692 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:15.936752 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:15.962329 1055021 cri.go:89] found id: ""
	I1208 01:58:15.962350 1055021 logs.go:282] 0 containers: []
	W1208 01:58:15.962358 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:15.962367 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:15.962378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 01:58:16.020986 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:16.067660 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:16.035539    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.036318    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051153    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.051779    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.055018    2928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:16.067744 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:16.067772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1208 01:58:16.112099 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.112132 1055021 retry.go:31] will retry after 29.72360839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:58:16.126560 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.126615 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.157854 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.157883 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.223999 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.224035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:18.742355 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:18.752998 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:18.753077 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:18.778077 1055021 cri.go:89] found id: ""
	I1208 01:58:18.778099 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.778107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:18.778114 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:18.778171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:18.802643 1055021 cri.go:89] found id: ""
	I1208 01:58:18.802665 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.802673 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:18.802679 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:18.802736 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:18.827413 1055021 cri.go:89] found id: ""
	I1208 01:58:18.827441 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.827450 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:18.827456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:18.827514 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:18.852593 1055021 cri.go:89] found id: ""
	I1208 01:58:18.852618 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.852627 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:18.852634 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:18.852694 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:18.877850 1055021 cri.go:89] found id: ""
	I1208 01:58:18.877876 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.877884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:18.877891 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:18.877949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:18.906907 1055021 cri.go:89] found id: ""
	I1208 01:58:18.906930 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.906938 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:18.906945 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:18.907007 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:18.932699 1055021 cri.go:89] found id: ""
	I1208 01:58:18.932723 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.932733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:18.932739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:18.932802 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:18.958426 1055021 cri.go:89] found id: ""
	I1208 01:58:18.958448 1055021 logs.go:282] 0 containers: []
	W1208 01:58:18.958456 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:18.958465 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:18.958476 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.023824 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.023904 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.043811 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.043946 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.116236 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:19.108500    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.109060    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.110542    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.111066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.112066    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:19.116259 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:19.116273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:19.145950 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.145986 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
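Each iteration of the wait loop recorded here has the same shape: check for a host-level kube-apiserver process with pgrep, list CRI containers for every control-plane component, and, finding none, fall back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. The probe commands below are taken verbatim from the log; the loop is only a manual reproduction of that sequence for debugging on the node (it assumes shell access, e.g. via minikube ssh, and that crictl is installed there), not minikube's implementation:

    # Reproduce the container probes minikube runs while waiting for the apiserver.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container found matching \"$c\""
    done
    # Same fallback log collection the log lines show:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400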
	I1208 01:58:21.678015 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:21.689017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:21.689107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:21.714453 1055021 cri.go:89] found id: ""
	I1208 01:58:21.714513 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.714522 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:21.714529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:21.714590 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:21.738662 1055021 cri.go:89] found id: ""
	I1208 01:58:21.738688 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.738697 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:21.738703 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:21.738765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:21.763648 1055021 cri.go:89] found id: ""
	I1208 01:58:21.763684 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.763693 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:21.763700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:21.763768 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:21.789120 1055021 cri.go:89] found id: ""
	I1208 01:58:21.789142 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.789150 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:21.789156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:21.789212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:21.814445 1055021 cri.go:89] found id: ""
	I1208 01:58:21.814466 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.814474 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:21.814480 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:21.814538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:21.843027 1055021 cri.go:89] found id: ""
	I1208 01:58:21.843061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.843070 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:21.843078 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:21.843139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:21.872604 1055021 cri.go:89] found id: ""
	I1208 01:58:21.872632 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.872640 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:21.872647 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:21.872725 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:21.898190 1055021 cri.go:89] found id: ""
	I1208 01:58:21.898225 1055021 logs.go:282] 0 containers: []
	W1208 01:58:21.898233 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:21.898258 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:21.898274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:21.963735 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:21.963774 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:21.981549 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:21.981580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.065337 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:22.056290    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.057401    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059215    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.059536    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.060962    3168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:22.065359 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:22.065373 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:22.096383 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.096419 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:24.626630 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:24.637406 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:24.637484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:24.662982 1055021 cri.go:89] found id: ""
	I1208 01:58:24.663005 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.663014 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:24.663020 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:24.663088 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:24.687863 1055021 cri.go:89] found id: ""
	I1208 01:58:24.687887 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.687897 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:24.687904 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:24.687965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:24.713087 1055021 cri.go:89] found id: ""
	I1208 01:58:24.713110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.713119 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:24.713125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:24.713185 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:24.738346 1055021 cri.go:89] found id: ""
	I1208 01:58:24.738369 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.738378 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:24.738385 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:24.738451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:24.764281 1055021 cri.go:89] found id: ""
	I1208 01:58:24.764309 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.764317 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:24.764323 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:24.764382 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:24.788244 1055021 cri.go:89] found id: ""
	I1208 01:58:24.788267 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.788276 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:24.788282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:24.788358 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:24.812521 1055021 cri.go:89] found id: ""
	I1208 01:58:24.812544 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.812553 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:24.812559 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:24.812620 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:24.837747 1055021 cri.go:89] found id: ""
	I1208 01:58:24.837772 1055021 logs.go:282] 0 containers: []
	W1208 01:58:24.837781 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:24.837790 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:24.837804 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:24.903152 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:24.903189 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:24.920792 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:24.920824 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:24.987709 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:24.979694    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.980251    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.981800    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.982264    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:24.983797    3286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:24.987780 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:24.987806 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:25.019693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.019773 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:26.987306 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:58:27.057603 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:27.057721 1055021 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
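Both the dashboard and default-storageclass applies fail for the same underlying reason: nothing is listening on localhost:8443, so the addon manifests themselves are not the problem. A quick manual check from inside the node (assuming minikube ssh access to whatever profile this test created) would be:

    # Is any apiserver container present, and does the port answer?
    sudo crictl ps -a --name=kube-apiserver
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"
    # kubectl's own hint, --validate=false, only skips schema validation; it does
    # not help while the connection itself is refused.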
	I1208 01:58:27.560847 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:27.570936 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:27.571004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:27.595473 1055021 cri.go:89] found id: ""
	I1208 01:58:27.595497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.595505 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:27.595512 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:27.595577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:27.620674 1055021 cri.go:89] found id: ""
	I1208 01:58:27.620696 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.620704 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:27.620710 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:27.620766 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:27.646168 1055021 cri.go:89] found id: ""
	I1208 01:58:27.646192 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.646202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:27.646208 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:27.646283 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:27.671472 1055021 cri.go:89] found id: ""
	I1208 01:58:27.671549 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.671564 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:27.671572 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:27.671632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:27.699385 1055021 cri.go:89] found id: ""
	I1208 01:58:27.699409 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.699417 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:27.699423 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:27.699492 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:27.726912 1055021 cri.go:89] found id: ""
	I1208 01:58:27.726937 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.726946 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:27.726953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:27.727011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:27.752037 1055021 cri.go:89] found id: ""
	I1208 01:58:27.752061 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.752070 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:27.752076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:27.752139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:27.777018 1055021 cri.go:89] found id: ""
	I1208 01:58:27.777081 1055021 logs.go:282] 0 containers: []
	W1208 01:58:27.777097 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:27.777106 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:27.777119 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:27.845091 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:27.837154    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.837853    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839520    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.839992    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:27.841140    3398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:27.845115 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:27.845129 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:27.873750 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:27.873794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:27.906540 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:27.906569 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:27.986314 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:27.986360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.504860 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:30.520332 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:30.520426 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:30.558545 1055021 cri.go:89] found id: ""
	I1208 01:58:30.558574 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.558589 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:30.558596 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:30.558670 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:30.587958 1055021 cri.go:89] found id: ""
	I1208 01:58:30.587979 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.587988 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:30.587994 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:30.588055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:30.613947 1055021 cri.go:89] found id: ""
	I1208 01:58:30.613969 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.613977 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:30.613983 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:30.614048 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:30.639872 1055021 cri.go:89] found id: ""
	I1208 01:58:30.639899 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.639908 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:30.639916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:30.639975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:30.664766 1055021 cri.go:89] found id: ""
	I1208 01:58:30.664789 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.664797 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:30.664804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:30.664862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:30.694045 1055021 cri.go:89] found id: ""
	I1208 01:58:30.694110 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.694130 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:30.694149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:30.694238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:30.719821 1055021 cri.go:89] found id: ""
	I1208 01:58:30.719843 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.719851 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:30.719857 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:30.719915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:30.745151 1055021 cri.go:89] found id: ""
	I1208 01:58:30.745176 1055021 logs.go:282] 0 containers: []
	W1208 01:58:30.745185 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:30.745194 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:30.745206 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:30.808884 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:30.808918 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:30.826624 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:30.826650 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:30.895279 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:30.886147    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.886660    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.888684    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.889150    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:30.890863    3515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:30.895304 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:30.895317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:30.927429 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:30.927478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:33.458304 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:33.468970 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:33.469040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:33.493566 1055021 cri.go:89] found id: ""
	I1208 01:58:33.493592 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.493601 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:33.493608 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:33.493669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:33.526608 1055021 cri.go:89] found id: ""
	I1208 01:58:33.526630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.526638 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:33.526644 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:33.526705 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:33.560265 1055021 cri.go:89] found id: ""
	I1208 01:58:33.560287 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.560295 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:33.560301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:33.560376 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:33.588803 1055021 cri.go:89] found id: ""
	I1208 01:58:33.588830 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.588839 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:33.588846 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:33.588908 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:33.614585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.614610 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.614619 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:33.614625 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:33.614684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:33.638894 1055021 cri.go:89] found id: ""
	I1208 01:58:33.638917 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.638926 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:33.638933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:33.638991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:33.664714 1055021 cri.go:89] found id: ""
	I1208 01:58:33.664736 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.664744 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:33.664752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:33.664814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:33.689585 1055021 cri.go:89] found id: ""
	I1208 01:58:33.689611 1055021 logs.go:282] 0 containers: []
	W1208 01:58:33.689620 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:33.689629 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:33.689641 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:33.753906 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:33.753942 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:33.771754 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:33.771783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:33.841023 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:33.832800    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.833663    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835371    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.835693    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:33.837198    3622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:33.841047 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:33.841060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:33.868853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:33.868891 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.397728 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.410372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.410443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:36.441015 1055021 cri.go:89] found id: ""
	I1208 01:58:36.441041 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.441049 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:36.441055 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:36.441117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:36.466353 1055021 cri.go:89] found id: ""
	I1208 01:58:36.466386 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.466395 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:36.466401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:36.466463 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:36.491643 1055021 cri.go:89] found id: ""
	I1208 01:58:36.491670 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.491679 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:36.491685 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:36.491743 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:36.531444 1055021 cri.go:89] found id: ""
	I1208 01:58:36.531472 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.531480 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:36.531487 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:36.531551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:36.561863 1055021 cri.go:89] found id: ""
	I1208 01:58:36.561891 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.561900 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:36.561906 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:36.561965 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:36.598817 1055021 cri.go:89] found id: ""
	I1208 01:58:36.598868 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.598877 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:36.598884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:36.598953 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:36.625352 1055021 cri.go:89] found id: ""
	I1208 01:58:36.625392 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.625402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:36.625408 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:36.625478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:36.649929 1055021 cri.go:89] found id: ""
	I1208 01:58:36.649961 1055021 logs.go:282] 0 containers: []
	W1208 01:58:36.649969 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:36.649979 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:36.649991 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:36.717242 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:36.708318    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.709177    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.710899    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.711330    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:36.712826    3729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:36.717272 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:36.717284 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:36.745340 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:36.745375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.772396 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:36.772423 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:36.840336 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:36.840375 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.353819 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:58:39.359310 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:58:39.415165 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:39.415265 1055021 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
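At this point every kubectl call in the run is failing with "connect: connection refused" against localhost:8443, and the crictl listings above and below find no kube-apiserver container, so the control plane simply is not running on the node. A minimal way to confirm that state by hand (a diagnostic sketch using the port and container name taken from the log, not something the test itself runs) would be:

    # is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # hit the health endpoint directly; while the apiserver is down this fails with 'connection refused'
    curl -ksS https://localhost:8443/healthz || true
    # confirm no apiserver container exists, matching the empty crictl results in the log
    sudo crictl ps -a --name=kube-apiserver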
	I1208 01:58:39.415318 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.415380 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.440780 1055021 cri.go:89] found id: ""
	I1208 01:58:39.440802 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.440817 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.440824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.440883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:39.469267 1055021 cri.go:89] found id: ""
	I1208 01:58:39.469293 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.469302 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:39.469308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:39.469369 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:39.497131 1055021 cri.go:89] found id: ""
	I1208 01:58:39.497154 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.497162 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:39.497171 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:39.497229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:39.533641 1055021 cri.go:89] found id: ""
	I1208 01:58:39.533666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.533675 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:39.533683 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:39.533741 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:39.569861 1055021 cri.go:89] found id: ""
	I1208 01:58:39.569884 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.569893 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:39.569900 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:39.569959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:39.598670 1055021 cri.go:89] found id: ""
	I1208 01:58:39.598694 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.598702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:39.598709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:39.598770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:39.623360 1055021 cri.go:89] found id: ""
	I1208 01:58:39.623384 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.623392 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:39.623398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:39.623464 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:39.647840 1055021 cri.go:89] found id: ""
	I1208 01:58:39.647864 1055021 logs.go:282] 0 containers: []
	W1208 01:58:39.647873 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:39.647881 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:39.647893 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:39.711466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:39.711505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:39.728921 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:39.728950 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:39.792077 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:39.784047    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.784646    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786248    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.786759    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:39.788290    3849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:39.792097 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:39.792111 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:39.819026 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:39.819064 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.348228 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.359751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.359835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.385781 1055021 cri.go:89] found id: ""
	I1208 01:58:42.385808 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.385818 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.385824 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.385884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:42.412513 1055021 cri.go:89] found id: ""
	I1208 01:58:42.412540 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.412555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:42.412562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:42.412621 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:42.439136 1055021 cri.go:89] found id: ""
	I1208 01:58:42.439202 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.439217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:42.439223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:42.439297 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:42.468994 1055021 cri.go:89] found id: ""
	I1208 01:58:42.469069 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.469092 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:42.469105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:42.469190 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:42.493446 1055021 cri.go:89] found id: ""
	I1208 01:58:42.493481 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.493489 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:42.493496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:42.493573 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:42.535705 1055021 cri.go:89] found id: ""
	I1208 01:58:42.535751 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.535760 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:42.535768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:42.535838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:42.565148 1055021 cri.go:89] found id: ""
	I1208 01:58:42.565174 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.565183 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:42.565189 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:42.565262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:42.592944 1055021 cri.go:89] found id: ""
	I1208 01:58:42.592967 1055021 logs.go:282] 0 containers: []
	W1208 01:58:42.592975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:42.592984 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:42.592995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:42.627360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:42.627389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:42.692577 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:42.692611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:42.710349 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:42.710378 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:42.782051 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:42.773850    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.774769    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.775843    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.776531    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:42.778230    3974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:42.782073 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:42.782085 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.310746 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.328999 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.329226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.355526 1055021 cri.go:89] found id: ""
	I1208 01:58:45.355554 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.355562 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.355569 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.355649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.385050 1055021 cri.go:89] found id: ""
	I1208 01:58:45.385073 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.385081 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.385087 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.385146 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.409413 1055021 cri.go:89] found id: ""
	I1208 01:58:45.409438 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.409447 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.409452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.409510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:45.445870 1055021 cri.go:89] found id: ""
	I1208 01:58:45.445903 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.445912 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:45.445919 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:45.445988 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:45.473347 1055021 cri.go:89] found id: ""
	I1208 01:58:45.473382 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.473391 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:45.473397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:45.473465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:45.497721 1055021 cri.go:89] found id: ""
	I1208 01:58:45.497756 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.497765 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:45.497772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:45.497839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:45.529708 1055021 cri.go:89] found id: ""
	I1208 01:58:45.529739 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.529748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:45.529754 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:45.529829 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:45.556748 1055021 cri.go:89] found id: ""
	I1208 01:58:45.556783 1055021 logs.go:282] 0 containers: []
	W1208 01:58:45.556792 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:45.556801 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:45.556812 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:45.623617 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:45.623652 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:45.642117 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:45.642151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:45.711093 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:45.703278    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.703733    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705280    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.705640    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:45.707204    4076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:45.711114 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:45.711127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:45.739133 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:45.739169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.836195 1055021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:58:45.896793 1055021 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:58:45.896954 1055021 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:58:45.900444 1055021 out.go:179] * Enabled addons: 
	I1208 01:58:45.903391 1055021 addons.go:530] duration metric: took 1m57.256950319s for enable addons: enabled=[]
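The addon applies above fail during client-side validation because kubectl cannot download the OpenAPI schema from the unreachable apiserver; the error text itself points at --validate=false as the escape hatch. A sketch of that retry, using the kubeconfig and manifest paths from the log (not a command the test actually ran), would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml

Skipping validation only removes the schema download, though; the apply still has to reach the apiserver, so with localhost:8443 refusing connections it would fail for the same underlying reason.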
	I1208 01:58:48.271013 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.282344 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.282467 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.314973 1055021 cri.go:89] found id: ""
	I1208 01:58:48.315046 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.315078 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.315098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.315204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.344987 1055021 cri.go:89] found id: ""
	I1208 01:58:48.345017 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.345026 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.345033 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.345094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.370650 1055021 cri.go:89] found id: ""
	I1208 01:58:48.370674 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.370681 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.370687 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.370749 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.396253 1055021 cri.go:89] found id: ""
	I1208 01:58:48.396319 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.396334 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.396341 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.396410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.425208 1055021 cri.go:89] found id: ""
	I1208 01:58:48.425235 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.425244 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.425250 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.425312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:48.455125 1055021 cri.go:89] found id: ""
	I1208 01:58:48.455150 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.455160 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:48.455177 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:48.455238 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:48.479964 1055021 cri.go:89] found id: ""
	I1208 01:58:48.480043 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.480059 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:48.480067 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:48.480128 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:48.506875 1055021 cri.go:89] found id: ""
	I1208 01:58:48.506902 1055021 logs.go:282] 0 containers: []
	W1208 01:58:48.506911 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:48.506920 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:48.506933 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:48.581685 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:48.581724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:48.600281 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:48.600313 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:48.663184 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:48.655740    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.656117    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657556    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.657848    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:48.659265    4196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:48.663203 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:48.663217 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:48.691509 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:48.691549 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.221462 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.231909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.231985 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.262905 1055021 cri.go:89] found id: ""
	I1208 01:58:51.262932 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.262940 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.262946 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.263006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.293540 1055021 cri.go:89] found id: ""
	I1208 01:58:51.293567 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.293576 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.293582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.293639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.324201 1055021 cri.go:89] found id: ""
	I1208 01:58:51.324228 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.324236 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.324242 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.324298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.351933 1055021 cri.go:89] found id: ""
	I1208 01:58:51.351960 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.351974 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.351981 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.352040 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.376814 1055021 cri.go:89] found id: ""
	I1208 01:58:51.376836 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.376845 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.376851 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.376909 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.401752 1055021 cri.go:89] found id: ""
	I1208 01:58:51.401776 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.401785 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.401791 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.401848 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.430825 1055021 cri.go:89] found id: ""
	I1208 01:58:51.430861 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.430870 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.430876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.430938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.455641 1055021 cri.go:89] found id: ""
	I1208 01:58:51.455666 1055021 logs.go:282] 0 containers: []
	W1208 01:58:51.455674 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.455684 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.455695 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:51.527696 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:51.516769    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.518139    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521321    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.521687    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:51.523661    4301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:51.527719 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:51.527732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:51.557037 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:51.557072 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:51.589759 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:51.589789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:51.655851 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.655888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:54.174903 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.185290 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.185363 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.213134 1055021 cri.go:89] found id: ""
	I1208 01:58:54.213158 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.213167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.213174 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.213234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.238420 1055021 cri.go:89] found id: ""
	I1208 01:58:54.238446 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.238455 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.238461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.238524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.272304 1055021 cri.go:89] found id: ""
	I1208 01:58:54.272331 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.272339 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.272345 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.272405 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.302582 1055021 cri.go:89] found id: ""
	I1208 01:58:54.302608 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.302617 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.302623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.302683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.331550 1055021 cri.go:89] found id: ""
	I1208 01:58:54.331577 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.331585 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.331591 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.331656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.356262 1055021 cri.go:89] found id: ""
	I1208 01:58:54.356285 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.356293 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.356300 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.356364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.382019 1055021 cri.go:89] found id: ""
	I1208 01:58:54.382045 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.382054 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.382060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.382120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.407111 1055021 cri.go:89] found id: ""
	I1208 01:58:54.407136 1055021 logs.go:282] 0 containers: []
	W1208 01:58:54.407145 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.407154 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.407169 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.470487 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.462399    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.462904    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464622    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.464978    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.466478    4415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.470509 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:54.470522 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:54.498660 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:54.498697 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:54.539432 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:54.539462 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.617690 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:54.617725 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.135616 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.145801 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.145871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.170603 1055021 cri.go:89] found id: ""
	I1208 01:58:57.170629 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.170637 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.170643 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.170701 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.197272 1055021 cri.go:89] found id: ""
	I1208 01:58:57.197300 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.197309 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.197315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.197379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.226393 1055021 cri.go:89] found id: ""
	I1208 01:58:57.226420 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.226430 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.226436 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.226499 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.267139 1055021 cri.go:89] found id: ""
	I1208 01:58:57.267215 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.267239 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.267257 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.267350 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.302475 1055021 cri.go:89] found id: ""
	I1208 01:58:57.302497 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.302505 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.302511 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.302571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.335859 1055021 cri.go:89] found id: ""
	I1208 01:58:57.335886 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.335894 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.335901 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.335959 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.360608 1055021 cri.go:89] found id: ""
	I1208 01:58:57.360630 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.360639 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.360646 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.360706 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.386045 1055021 cri.go:89] found id: ""
	I1208 01:58:57.386067 1055021 logs.go:282] 0 containers: []
	W1208 01:58:57.386076 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.386084 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.386096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:57.454478 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.454515 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.472469 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.472503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.545965 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.535837    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.537764    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539593    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.539902    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.541322    4536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.545998 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:58:57.546011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:58:57.584922 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.584959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:00.114637 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.175958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.176042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.249754 1055021 cri.go:89] found id: ""
	I1208 01:59:00.249778 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.249788 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.249795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.249868 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.304452 1055021 cri.go:89] found id: ""
	I1208 01:59:00.304487 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.304497 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.304503 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.304576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.346364 1055021 cri.go:89] found id: ""
	I1208 01:59:00.346424 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.346434 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.346465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.346577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.377822 1055021 cri.go:89] found id: ""
	I1208 01:59:00.377852 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.377862 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.377868 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.377963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.406823 1055021 cri.go:89] found id: ""
	I1208 01:59:00.406875 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.406884 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.406908 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.406992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.435875 1055021 cri.go:89] found id: ""
	I1208 01:59:00.435911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.435920 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.435942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.436025 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.463084 1055021 cri.go:89] found id: ""
	I1208 01:59:00.463117 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.463126 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.463135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.463243 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.489555 1055021 cri.go:89] found id: ""
	I1208 01:59:00.489589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:00.489598 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.489626 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.489645 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.562522 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.562560 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.582358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.582389 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.649877 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.641219    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.641935    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643483    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.643812    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.645329    4651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.649899 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:00.649912 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:00.682085 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.682120 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.216065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.226430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.226503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.253068 1055021 cri.go:89] found id: ""
	I1208 01:59:03.253093 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.253102 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.253109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.253168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.282867 1055021 cri.go:89] found id: ""
	I1208 01:59:03.282894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.282903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.282910 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.282969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.320054 1055021 cri.go:89] found id: ""
	I1208 01:59:03.320080 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.320092 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.320098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.320155 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.347220 1055021 cri.go:89] found id: ""
	I1208 01:59:03.347243 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.347252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.347258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.347319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.373498 1055021 cri.go:89] found id: ""
	I1208 01:59:03.373570 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.373595 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.373613 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.373703 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.399912 1055021 cri.go:89] found id: ""
	I1208 01:59:03.399948 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.399957 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.399964 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.400023 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.425601 1055021 cri.go:89] found id: ""
	I1208 01:59:03.425625 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.425634 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.425640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.425698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.454732 1055021 cri.go:89] found id: ""
	I1208 01:59:03.454758 1055021 logs.go:282] 0 containers: []
	W1208 01:59:03.454767 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.454775 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.454789 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.530461 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.530493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.549828 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.549917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.620701 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.611984    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.612797    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.613945    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.614499    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.616300    4764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.620720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:03.620735 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:03.649018 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.649058 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.177524 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.187461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.187531 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.214977 1055021 cri.go:89] found id: ""
	I1208 01:59:06.214999 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.215008 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.215015 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.215094 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.238383 1055021 cri.go:89] found id: ""
	I1208 01:59:06.238493 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.238514 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.238534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.238619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.272265 1055021 cri.go:89] found id: ""
	I1208 01:59:06.272329 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.272351 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.272367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.272453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.302615 1055021 cri.go:89] found id: ""
	I1208 01:59:06.302658 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.302672 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.302678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.302750 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.331427 1055021 cri.go:89] found id: ""
	I1208 01:59:06.331491 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.331512 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.331534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.331619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.356630 1055021 cri.go:89] found id: ""
	I1208 01:59:06.356711 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.356726 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.356734 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.356792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.382232 1055021 cri.go:89] found id: ""
	I1208 01:59:06.382265 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.382273 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.382279 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.382345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.409564 1055021 cri.go:89] found id: ""
	I1208 01:59:06.409598 1055021 logs.go:282] 0 containers: []
	W1208 01:59:06.409607 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.409616 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.409629 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.474483 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.474521 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:06.492236 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.492265 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.581040 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:06.572371    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.572811    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574498    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.574975    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.576590    4874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:06.581061 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:06.581074 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:06.609481 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.609528 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:09.142358 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.152558 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.152645 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.176404 1055021 cri.go:89] found id: ""
	I1208 01:59:09.176469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.176483 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.176494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.176555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.200664 1055021 cri.go:89] found id: ""
	I1208 01:59:09.200687 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.200696 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.200702 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.200759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.227242 1055021 cri.go:89] found id: ""
	I1208 01:59:09.227266 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.227274 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.227280 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.227339 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.251746 1055021 cri.go:89] found id: ""
	I1208 01:59:09.251777 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.251786 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.251792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.251859 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.285331 1055021 cri.go:89] found id: ""
	I1208 01:59:09.285356 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.285365 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.285371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.285438 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.323377 1055021 cri.go:89] found id: ""
	I1208 01:59:09.323403 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.323411 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.323418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.323479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.348974 1055021 cri.go:89] found id: ""
	I1208 01:59:09.349042 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.349058 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.349065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.349127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.378922 1055021 cri.go:89] found id: ""
	I1208 01:59:09.378954 1055021 logs.go:282] 0 containers: []
	W1208 01:59:09.378962 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.378972 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.378983 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.444646 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.444685 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.462014 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.462050 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.537469 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.528816    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.529544    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531275    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.531821    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.533447    4981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.537502 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:09.537514 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:09.568427 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.568465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.103793 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.114409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.114485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.143200 1055021 cri.go:89] found id: ""
	I1208 01:59:12.143235 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.143245 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.143251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.143323 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.171946 1055021 cri.go:89] found id: ""
	I1208 01:59:12.171971 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.171979 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.171985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.172050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.196625 1055021 cri.go:89] found id: ""
	I1208 01:59:12.196651 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.196661 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.196669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.196775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.223108 1055021 cri.go:89] found id: ""
	I1208 01:59:12.223178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.223203 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.223221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.223315 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.253115 1055021 cri.go:89] found id: ""
	I1208 01:59:12.253141 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.253155 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.253173 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.253271 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.293405 1055021 cri.go:89] found id: ""
	I1208 01:59:12.293429 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.293438 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.293444 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.293512 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.323970 1055021 cri.go:89] found id: ""
	I1208 01:59:12.324002 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.324011 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.324017 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.324087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.355979 1055021 cri.go:89] found id: ""
	I1208 01:59:12.356005 1055021 logs.go:282] 0 containers: []
	W1208 01:59:12.356013 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.356023 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.356035 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.421458 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.421496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.440234 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.440269 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.509186 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.497972    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.498450    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.500774    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.501510    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.503333    5095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.509214 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:12.509226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:12.541753 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.541790 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.078928 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.091792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.091882 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.118461 1055021 cri.go:89] found id: ""
	I1208 01:59:15.118482 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.118490 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.118496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.118561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.143588 1055021 cri.go:89] found id: ""
	I1208 01:59:15.143612 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.143621 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.143627 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.143687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.174121 1055021 cri.go:89] found id: ""
	I1208 01:59:15.174149 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.174158 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.174164 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.174281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.202466 1055021 cri.go:89] found id: ""
	I1208 01:59:15.202489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.202498 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.202504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.202563 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.229640 1055021 cri.go:89] found id: ""
	I1208 01:59:15.229663 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.229672 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.229678 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.229737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.259982 1055021 cri.go:89] found id: ""
	I1208 01:59:15.260013 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.260021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.260027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.260085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.299510 1055021 cri.go:89] found id: ""
	I1208 01:59:15.299535 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.299544 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.299551 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.299639 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.327621 1055021 cri.go:89] found id: ""
	I1208 01:59:15.327655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:15.327664 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.327673 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.327684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:15.394588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.394632 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.412251 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.412283 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.478739 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.470070    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.470945    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.472680    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.473007    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.474524    5208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.478760 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:15.478772 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:15.507201 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.507279 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.049265 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.060577 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.060652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.087023 1055021 cri.go:89] found id: ""
	I1208 01:59:18.087050 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.087066 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.087073 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.087132 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.115800 1055021 cri.go:89] found id: ""
	I1208 01:59:18.115826 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.115835 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.115841 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.115901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.145764 1055021 cri.go:89] found id: ""
	I1208 01:59:18.145787 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.145797 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.145803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.145862 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.174947 1055021 cri.go:89] found id: ""
	I1208 01:59:18.174974 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.174983 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.174990 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.175050 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.200824 1055021 cri.go:89] found id: ""
	I1208 01:59:18.200847 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.200857 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.200863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.200935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.229145 1055021 cri.go:89] found id: ""
	I1208 01:59:18.229168 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.229176 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.229185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.229246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.266059 1055021 cri.go:89] found id: ""
	I1208 01:59:18.266083 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.266092 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.266098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.266159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.293538 1055021 cri.go:89] found id: ""
	I1208 01:59:18.293605 1055021 logs.go:282] 0 containers: []
	W1208 01:59:18.293630 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.293657 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.293682 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.366543 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.366580 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.387334 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.387367 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.457441 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.449063    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.449741    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451394    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.451892    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.453442    5323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:18.457480 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:18.457496 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:18.486126 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.486159 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
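Each gathering cycle in this log walks the same fixed list of control-plane and addon component names and asks CRI-O for matching containers in any state ({State:all}); every probe in this run returns an empty ID list. A rough equivalent of that probe loop, as a hedged sketch (it reuses the same crictl flags the log itself shows; the component list is copied from the probes above):

    # probe CRI-O for each component minikube looks for; empty output means no container
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none found>}"
    done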
	I1208 01:59:21.020889 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.031877 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.031948 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.061454 1055021 cri.go:89] found id: ""
	I1208 01:59:21.061480 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.061489 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.061496 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.061561 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.086273 1055021 cri.go:89] found id: ""
	I1208 01:59:21.086300 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.086308 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.086315 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.086373 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.112614 1055021 cri.go:89] found id: ""
	I1208 01:59:21.112637 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.112646 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.112652 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.112710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.142489 1055021 cri.go:89] found id: ""
	I1208 01:59:21.142511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.142521 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.142527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.142584 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.167579 1055021 cri.go:89] found id: ""
	I1208 01:59:21.167602 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.167618 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.167624 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.167683 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.192114 1055021 cri.go:89] found id: ""
	I1208 01:59:21.192178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.192194 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.192202 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.192266 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.216638 1055021 cri.go:89] found id: ""
	I1208 01:59:21.216660 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.216669 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.216681 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.216739 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.241924 1055021 cri.go:89] found id: ""
	I1208 01:59:21.241956 1055021 logs.go:282] 0 containers: []
	W1208 01:59:21.241965 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.241989 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.242005 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.320443 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.320516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.339967 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.340098 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.405503 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.397000    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.397558    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399320    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.399881    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.401425    5434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.405526 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:21.405540 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:21.433479 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.433513 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:23.960720 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:23.971271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:23.971346 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:23.996003 1055021 cri.go:89] found id: ""
	I1208 01:59:23.996028 1055021 logs.go:282] 0 containers: []
	W1208 01:59:23.996037 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:23.996044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:23.996111 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.024119 1055021 cri.go:89] found id: ""
	I1208 01:59:24.024146 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.024154 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.024160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.024239 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.051095 1055021 cri.go:89] found id: ""
	I1208 01:59:24.051179 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.051202 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.051217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.051298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.076451 1055021 cri.go:89] found id: ""
	I1208 01:59:24.076477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.076486 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.076493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.076577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.105499 1055021 cri.go:89] found id: ""
	I1208 01:59:24.105527 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.105537 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.105543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.105656 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.136713 1055021 cri.go:89] found id: ""
	I1208 01:59:24.136736 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.136744 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.136751 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.136836 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.165410 1055021 cri.go:89] found id: ""
	I1208 01:59:24.165442 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.165453 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.165460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.165541 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.194981 1055021 cri.go:89] found id: ""
	I1208 01:59:24.195018 1055021 logs.go:282] 0 containers: []
	W1208 01:59:24.195028 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.195037 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.195049 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.260506 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.260541 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.281317 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.281351 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.350532 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.342949    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.343351    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.344919    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.345215    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.346724    5545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.350562 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:24.350574 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:24.378730 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.378760 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.906964 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.918049 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.918151 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:26.944808 1055021 cri.go:89] found id: ""
	I1208 01:59:26.944832 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.944840 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:26.944863 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:26.944936 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:26.969519 1055021 cri.go:89] found id: ""
	I1208 01:59:26.969552 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.969561 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:26.969583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:26.969664 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:26.997687 1055021 cri.go:89] found id: ""
	I1208 01:59:26.997721 1055021 logs.go:282] 0 containers: []
	W1208 01:59:26.997730 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:26.997736 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:26.997835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.029005 1055021 cri.go:89] found id: ""
	I1208 01:59:27.029029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.029037 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.029044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.029121 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.052964 1055021 cri.go:89] found id: ""
	I1208 01:59:27.052989 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.053006 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.053027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.053114 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.081309 1055021 cri.go:89] found id: ""
	I1208 01:59:27.081342 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.081352 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.081375 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.081454 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.105197 1055021 cri.go:89] found id: ""
	I1208 01:59:27.105230 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.105239 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.105245 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.105311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.129963 1055021 cri.go:89] found id: ""
	I1208 01:59:27.129994 1055021 logs.go:282] 0 containers: []
	W1208 01:59:27.130003 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.130012 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:27.130023 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:27.157821 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.157853 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:27.187177 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.187201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.257425 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.257459 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.284073 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.284112 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.365290 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.357295    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357939    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359497    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360062    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.361335    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
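The timestamps show the whole cycle (pgrep, eight crictl probes, then kubelet/dmesg/describe-nodes/CRI-O/container-status gathering) repeating roughly every three seconds while minikube waits for the apiserver to appear. A minimal stand-alone wait loop in the same spirit, with an explicit timeout so it cannot spin forever (the 300 s budget is an arbitrary choice for illustration):

    # wait up to ~300s for a kube-apiserver container to show up in CRI-O
    deadline=$((SECONDS + 300))
    until sudo crictl ps --quiet --name kube-apiserver | grep -q .; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver container is up"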
	I1208 01:59:29.866080 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.876623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.876700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.905223 1055021 cri.go:89] found id: ""
	I1208 01:59:29.905247 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.905257 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.905264 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.905328 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:29.935886 1055021 cri.go:89] found id: ""
	I1208 01:59:29.935911 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.935920 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:29.935928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:29.935989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:29.961459 1055021 cri.go:89] found id: ""
	I1208 01:59:29.961489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.961499 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:29.961521 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:29.961588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:29.989601 1055021 cri.go:89] found id: ""
	I1208 01:59:29.989666 1055021 logs.go:282] 0 containers: []
	W1208 01:59:29.989691 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:29.989709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:29.989794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.034678 1055021 cri.go:89] found id: ""
	I1208 01:59:30.034757 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.034783 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.034802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.034922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.068355 1055021 cri.go:89] found id: ""
	I1208 01:59:30.068380 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.068388 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.068395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.068456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.095676 1055021 cri.go:89] found id: ""
	I1208 01:59:30.095706 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.095717 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.095723 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.095801 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.122432 1055021 cri.go:89] found id: ""
	I1208 01:59:30.122469 1055021 logs.go:282] 0 containers: []
	W1208 01:59:30.122479 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.122504 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.122543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.191149 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.181728    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.182497    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.183663    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.185488    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.186087    5764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:30.191170 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:30.191183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:30.220413 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.220447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.258205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.258234 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.330424 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.330461 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
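Since CRI-O reports no control-plane containers at all, and the static control-plane pods are created by the kubelet from its manifest directory, the kubelet journal that minikube gathers above is the most likely place to find the underlying error. A hedged sketch for narrowing that journal down by hand (run inside the node; the grep pattern is just a starting point, not minikube's own filter):

    # is the kubelet even running?
    systemctl is-active kubelet

    # recent kubelet errors/failures, most recent last
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refus' | tail -n 40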
	I1208 01:59:32.850065 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.861143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.861227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.885421 1055021 cri.go:89] found id: ""
	I1208 01:59:32.885447 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.885457 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.885463 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.885524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.911689 1055021 cri.go:89] found id: ""
	I1208 01:59:32.911716 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.911726 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.911732 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.911794 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:32.941141 1055021 cri.go:89] found id: ""
	I1208 01:59:32.941166 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.941175 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:32.941182 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:32.941244 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:32.970750 1055021 cri.go:89] found id: ""
	I1208 01:59:32.970771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.970779 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:32.970786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:32.970883 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:32.996768 1055021 cri.go:89] found id: ""
	I1208 01:59:32.996797 1055021 logs.go:282] 0 containers: []
	W1208 01:59:32.996806 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:32.996812 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:32.996887 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.025374 1055021 cri.go:89] found id: ""
	I1208 01:59:33.025410 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.025419 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.025448 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.025547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.051845 1055021 cri.go:89] found id: ""
	I1208 01:59:33.051878 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.051888 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.051895 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.051969 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.078543 1055021 cri.go:89] found id: ""
	I1208 01:59:33.078566 1055021 logs.go:282] 0 containers: []
	W1208 01:59:33.078575 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.078584 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.078597 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.096489 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.096518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.168941 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.160593    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.161311    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.162982    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.163490    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.165080    5883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.168962 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:33.168977 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:33.197574 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.197616 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:33.226563 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.226590 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:35.798966 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.810253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:35.810325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:35.835492 1055021 cri.go:89] found id: ""
	I1208 01:59:35.835516 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.835525 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:35.835534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:35.835593 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:35.861797 1055021 cri.go:89] found id: ""
	I1208 01:59:35.861823 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.861833 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:35.861839 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:35.861901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:35.887036 1055021 cri.go:89] found id: ""
	I1208 01:59:35.887073 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.887083 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:35.887090 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:35.887159 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:35.915379 1055021 cri.go:89] found id: ""
	I1208 01:59:35.915456 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.915478 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:35.915493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:35.915566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:35.940687 1055021 cri.go:89] found id: ""
	I1208 01:59:35.940714 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.940724 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:35.940730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:35.940839 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:35.967960 1055021 cri.go:89] found id: ""
	I1208 01:59:35.968038 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.968060 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:35.968074 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:35.968147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:35.993884 1055021 cri.go:89] found id: ""
	I1208 01:59:35.993927 1055021 logs.go:282] 0 containers: []
	W1208 01:59:35.993936 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:35.993942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:35.994012 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:36.027031 1055021 cri.go:89] found id: ""
	I1208 01:59:36.027056 1055021 logs.go:282] 0 containers: []
	W1208 01:59:36.027074 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:36.027084 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:36.027097 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:36.092294 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:36.083801    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.084280    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086037    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.086607    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:36.088237    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:36.092315 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:36.092330 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:36.120891 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:36.120927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:36.148475 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:36.148507 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:36.216306 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:36.216344 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:38.734253 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:38.744803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:38.744884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:38.777276 1055021 cri.go:89] found id: ""
	I1208 01:59:38.777305 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.777314 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:38.777320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:38.777379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:38.815858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.815894 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.815903 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:38.815909 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:38.815979 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:38.845051 1055021 cri.go:89] found id: ""
	I1208 01:59:38.845084 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.845093 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:38.845098 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:38.845164 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:38.870145 1055021 cri.go:89] found id: ""
	I1208 01:59:38.870178 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.870187 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:38.870193 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:38.870261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:38.897461 1055021 cri.go:89] found id: ""
	I1208 01:59:38.897489 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.897498 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:38.897505 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:38.897564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:38.923327 1055021 cri.go:89] found id: ""
	I1208 01:59:38.923351 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.923360 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:38.923367 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:38.923430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:38.949858 1055021 cri.go:89] found id: ""
	I1208 01:59:38.949884 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.949893 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:38.949899 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:38.949963 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:38.975805 1055021 cri.go:89] found id: ""
	I1208 01:59:38.975831 1055021 logs.go:282] 0 containers: []
	W1208 01:59:38.975840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:38.975849 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:38.975861 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:39.040102 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:39.040140 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:39.057980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:39.058045 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:39.129261 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:39.119922    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.120526    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122237    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122793    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124346    6112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:39.129281 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:39.129297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:39.157488 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:39.157524 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
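One further sanity check worth noting: the failing `kubectl describe nodes` calls use the node-local kubeconfig at /var/lib/minikube/kubeconfig, and the dial target in the errors ([::1]:8443) should match its server entry. A small sketch to confirm the endpoint and probe it directly (curl availability in the node image is an assumption; -k skips TLS verification since only connectivity matters here):

    # which endpoint is the node-local kubeconfig pointing at?
    sudo grep 'server:' /var/lib/minikube/kubeconfig

    # probe it directly; "connection refused" here matches the kubectl errors above
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"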
	I1208 01:59:41.687952 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:41.698803 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:41.698906 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:41.724062 1055021 cri.go:89] found id: ""
	I1208 01:59:41.724139 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.724171 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:41.724184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:41.724260 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:41.756674 1055021 cri.go:89] found id: ""
	I1208 01:59:41.756712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.756720 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:41.756727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:41.756797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:41.793181 1055021 cri.go:89] found id: ""
	I1208 01:59:41.793208 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.793217 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:41.793223 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:41.793289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:41.823566 1055021 cri.go:89] found id: ""
	I1208 01:59:41.823589 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.823597 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:41.823603 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:41.823660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:41.848188 1055021 cri.go:89] found id: ""
	I1208 01:59:41.848215 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.848224 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:41.848231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:41.848289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:41.874016 1055021 cri.go:89] found id: ""
	I1208 01:59:41.874053 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.874062 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:41.874068 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:41.874144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:41.901494 1055021 cri.go:89] found id: ""
	I1208 01:59:41.901517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.901525 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:41.901531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:41.901588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:41.927897 1055021 cri.go:89] found id: ""
	I1208 01:59:41.927919 1055021 logs.go:282] 0 containers: []
	W1208 01:59:41.927928 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:41.927936 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:41.927948 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:41.989449 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:41.980854    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.981680    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983354    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.983674    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:41.985164    6219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:41.989523 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:41.989543 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:42.035690 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:42.035724 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:42.065962 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:42.066011 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:42.136350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:42.136460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.657754 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:44.669949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:44.670036 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:44.700311 1055021 cri.go:89] found id: ""
	I1208 01:59:44.700341 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.700352 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:44.700358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:44.700422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:44.726358 1055021 cri.go:89] found id: ""
	I1208 01:59:44.726383 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.726392 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:44.726398 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:44.726461 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:44.761403 1055021 cri.go:89] found id: ""
	I1208 01:59:44.761430 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.761440 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:44.761447 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:44.761503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:44.792746 1055021 cri.go:89] found id: ""
	I1208 01:59:44.792771 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.792780 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:44.792786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:44.792845 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:44.822139 1055021 cri.go:89] found id: ""
	I1208 01:59:44.822170 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.822179 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:44.822185 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:44.822246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:44.848969 1055021 cri.go:89] found id: ""
	I1208 01:59:44.849036 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.849051 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:44.849060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:44.849123 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:44.877689 1055021 cri.go:89] found id: ""
	I1208 01:59:44.877712 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.877720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:44.877727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:44.877792 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:44.905370 1055021 cri.go:89] found id: ""
	I1208 01:59:44.905394 1055021 logs.go:282] 0 containers: []
	W1208 01:59:44.905403 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:44.905412 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:44.905424 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:44.923373 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:44.923410 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:44.995648 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:44.986466    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.987166    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.988948    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.989586    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:44.991267    6337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:44.995670 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:44.995684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:45.028693 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:45.028744 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:45.080489 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:45.080534 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:47.697315 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:47.707837 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:47.707910 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:47.731910 1055021 cri.go:89] found id: ""
	I1208 01:59:47.731934 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.731943 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:47.731950 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:47.732009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:47.765844 1055021 cri.go:89] found id: ""
	I1208 01:59:47.765869 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.765887 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:47.765894 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:47.765955 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:47.805305 1055021 cri.go:89] found id: ""
	I1208 01:59:47.805328 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.805342 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:47.805349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:47.805407 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:47.832547 1055021 cri.go:89] found id: ""
	I1208 01:59:47.832572 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.832581 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:47.832587 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:47.832646 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:47.857492 1055021 cri.go:89] found id: ""
	I1208 01:59:47.857517 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.857526 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:47.857533 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:47.857595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:47.885564 1055021 cri.go:89] found id: ""
	I1208 01:59:47.885591 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.885599 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:47.885606 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:47.885668 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:47.914630 1055021 cri.go:89] found id: ""
	I1208 01:59:47.914655 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.914664 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:47.914671 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:47.914737 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:47.944185 1055021 cri.go:89] found id: ""
	I1208 01:59:47.944216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:47.944226 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:47.944236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:47.944247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:47.973585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:47.973622 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:48.011189 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:48.011218 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:48.078148 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:48.078187 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:48.098135 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:48.098167 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:48.174366 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:48.165720    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.166426    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168073    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.168423    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:48.169953    6460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:50.674625 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:50.685161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:50.685235 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:50.712131 1055021 cri.go:89] found id: ""
	I1208 01:59:50.712158 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.712167 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:50.712175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:50.712236 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:50.741188 1055021 cri.go:89] found id: ""
	I1208 01:59:50.741216 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.741224 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:50.741231 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:50.741325 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:50.778993 1055021 cri.go:89] found id: ""
	I1208 01:59:50.779016 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.779026 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:50.779034 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:50.779103 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:50.820444 1055021 cri.go:89] found id: ""
	I1208 01:59:50.820477 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.820487 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:50.820494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:50.820552 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:50.845727 1055021 cri.go:89] found id: ""
	I1208 01:59:50.845752 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.845761 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:50.845768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:50.845833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:50.875375 1055021 cri.go:89] found id: ""
	I1208 01:59:50.875398 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.875406 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:50.875412 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:50.875472 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:50.899812 1055021 cri.go:89] found id: ""
	I1208 01:59:50.899836 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.899846 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:50.899852 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:50.899911 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:50.925692 1055021 cri.go:89] found id: ""
	I1208 01:59:50.925717 1055021 logs.go:282] 0 containers: []
	W1208 01:59:50.925725 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:50.925735 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:50.925751 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:50.991330 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:50.991366 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:51.010240 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:51.010276 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:51.075773 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:51.066579    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.067361    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069203    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.069940    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:51.071756    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:51.075801 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:51.075813 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:51.104705 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:51.104737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:53.634984 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:53.645378 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:53.645451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:53.676623 1055021 cri.go:89] found id: ""
	I1208 01:59:53.676647 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.676657 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:53.676664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:53.676723 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:53.700948 1055021 cri.go:89] found id: ""
	I1208 01:59:53.700973 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.700982 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:53.700988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:53.701047 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:53.725665 1055021 cri.go:89] found id: ""
	I1208 01:59:53.725689 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.725698 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:53.725704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:53.725760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:53.750770 1055021 cri.go:89] found id: ""
	I1208 01:59:53.750794 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.750803 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:53.750809 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:53.750885 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:53.784279 1055021 cri.go:89] found id: ""
	I1208 01:59:53.784304 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.784312 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:53.784319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:53.784378 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:53.812355 1055021 cri.go:89] found id: ""
	I1208 01:59:53.812381 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.812390 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:53.812396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:53.812456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:53.837608 1055021 cri.go:89] found id: ""
	I1208 01:59:53.837634 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.837642 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:53.837648 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:53.837709 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:53.863046 1055021 cri.go:89] found id: ""
	I1208 01:59:53.863076 1055021 logs.go:282] 0 containers: []
	W1208 01:59:53.863085 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:53.863095 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:53.863136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:53.928268 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:53.928309 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:53.945830 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:53.945860 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:54.012382 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:54.002168    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.003441    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.004593    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.005541    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:54.007933    6670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:54.012407 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:54.012447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:54.043446 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:54.043481 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:56.571785 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:56.582156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:56.582228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:56.611270 1055021 cri.go:89] found id: ""
	I1208 01:59:56.611292 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.611301 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:56.611307 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:56.611371 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:56.638765 1055021 cri.go:89] found id: ""
	I1208 01:59:56.638788 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.638797 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:56.638802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:56.638888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:56.663341 1055021 cri.go:89] found id: ""
	I1208 01:59:56.663368 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.663377 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:56.663383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:56.663495 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:56.688606 1055021 cri.go:89] found id: ""
	I1208 01:59:56.688633 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.688643 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:56.688649 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:56.688730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:56.714263 1055021 cri.go:89] found id: ""
	I1208 01:59:56.714287 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.714296 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:56.714303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:56.714379 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:56.738023 1055021 cri.go:89] found id: ""
	I1208 01:59:56.738047 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.738056 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:56.738062 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:56.738141 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:56.767926 1055021 cri.go:89] found id: ""
	I1208 01:59:56.767951 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.767960 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:56.767966 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:56.768071 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:56.801241 1055021 cri.go:89] found id: ""
	I1208 01:59:56.801268 1055021 logs.go:282] 0 containers: []
	W1208 01:59:56.801277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:56.801286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:56.801317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:56.873621 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:56.873657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:56.891086 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:56.891116 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:56.956286 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:56.948037    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.948565    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950145    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.950717    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:56.952225    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:56.956306 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:56.956319 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:56.991921 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:56.991965 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.538010 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:59.548530 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:59.548598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:59.574677 1055021 cri.go:89] found id: ""
	I1208 01:59:59.574701 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.574709 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:59.574716 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 01:59:59.574779 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:59.600311 1055021 cri.go:89] found id: ""
	I1208 01:59:59.600337 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.600346 1055021 logs.go:284] No container was found matching "etcd"
	I1208 01:59:59.600352 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 01:59:59.600410 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:59.627833 1055021 cri.go:89] found id: ""
	I1208 01:59:59.627858 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.627867 1055021 logs.go:284] No container was found matching "coredns"
	I1208 01:59:59.627873 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:59.627946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:59.652005 1055021 cri.go:89] found id: ""
	I1208 01:59:59.652029 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.652038 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:59.652044 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:59.652138 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:59.676487 1055021 cri.go:89] found id: ""
	I1208 01:59:59.676511 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.676519 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:59.676525 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:59.676581 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:59.701988 1055021 cri.go:89] found id: ""
	I1208 01:59:59.702012 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.702020 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:59.702027 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:59.702085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:59.726000 1055021 cri.go:89] found id: ""
	I1208 01:59:59.726025 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.726034 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:59.726040 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:59.726100 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:59.751097 1055021 cri.go:89] found id: ""
	I1208 01:59:59.751123 1055021 logs.go:282] 0 containers: []
	W1208 01:59:59.751131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:59.751141 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:59.751154 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:59.832931 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:59.824301    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.825096    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.826704    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.827293    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:59.828983    6892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:59.832954 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 01:59:59.832966 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 01:59:59.862055 1055021 logs.go:123] Gathering logs for container status ...
	I1208 01:59:59.862089 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:59.890385 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:59.890414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:59.959793 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:59.959825 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.477852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:02.489201 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:02.489312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:02.516698 1055021 cri.go:89] found id: ""
	I1208 02:00:02.516725 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.516734 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:02.516741 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:02.516825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:02.545938 1055021 cri.go:89] found id: ""
	I1208 02:00:02.545965 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.545974 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:02.545980 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:02.546051 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:02.574765 1055021 cri.go:89] found id: ""
	I1208 02:00:02.574799 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.574808 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:02.574815 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:02.574920 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:02.600958 1055021 cri.go:89] found id: ""
	I1208 02:00:02.600984 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.600992 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:02.601001 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:02.601061 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:02.627836 1055021 cri.go:89] found id: ""
	I1208 02:00:02.627862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.627872 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:02.627879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:02.627942 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:02.654803 1055021 cri.go:89] found id: ""
	I1208 02:00:02.654831 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.654864 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:02.654872 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:02.654938 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:02.682455 1055021 cri.go:89] found id: ""
	I1208 02:00:02.682487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.682503 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:02.682510 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:02.682577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:02.709680 1055021 cri.go:89] found id: ""
	I1208 02:00:02.709709 1055021 logs.go:282] 0 containers: []
	W1208 02:00:02.709718 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:02.709728 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:02.709741 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:02.776682 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:02.776761 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:02.795697 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:02.795794 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:02.873752 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:02.864663    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.865270    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867028    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.867571    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:02.869396    7012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:02.873773 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:02.873787 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:02.903468 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:02.903511 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.438786 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:05.449615 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:05.449691 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:05.475122 1055021 cri.go:89] found id: ""
	I1208 02:00:05.475147 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.475156 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:05.475162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:05.475223 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:05.500749 1055021 cri.go:89] found id: ""
	I1208 02:00:05.500772 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.500781 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:05.500788 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:05.500854 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:05.526357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.526435 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.526456 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:05.526475 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:05.526564 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:05.553466 1055021 cri.go:89] found id: ""
	I1208 02:00:05.553493 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.553502 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:05.553509 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:05.553570 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:05.583119 1055021 cri.go:89] found id: ""
	I1208 02:00:05.583145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.583154 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:05.583161 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:05.583229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:05.613357 1055021 cri.go:89] found id: ""
	I1208 02:00:05.613385 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.613394 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:05.613401 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:05.613465 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:05.639303 1055021 cri.go:89] found id: ""
	I1208 02:00:05.639328 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.639337 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:05.639358 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:05.639422 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:05.666333 1055021 cri.go:89] found id: ""
	I1208 02:00:05.666372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:05.666382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:05.666392 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:05.666405 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:05.696869 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:05.696901 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:05.762499 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:05.762536 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:05.780857 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:05.780889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:05.848522 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:05.840229    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.840814    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.842374    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.843126    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:05.844227    7134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:05.848585 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:05.848598 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.377424 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:08.388192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:08.388265 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:08.414029 1055021 cri.go:89] found id: ""
	I1208 02:00:08.414050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.414059 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:08.414065 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:08.414127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:08.441760 1055021 cri.go:89] found id: ""
	I1208 02:00:08.441782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.441790 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:08.441796 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:08.441857 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:08.466751 1055021 cri.go:89] found id: ""
	I1208 02:00:08.466774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.466783 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:08.466789 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:08.466870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:08.493249 1055021 cri.go:89] found id: ""
	I1208 02:00:08.493272 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.493280 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:08.493287 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:08.493345 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:08.519677 1055021 cri.go:89] found id: ""
	I1208 02:00:08.519707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.519716 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:08.519722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:08.519788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:08.545435 1055021 cri.go:89] found id: ""
	I1208 02:00:08.545460 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.545469 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:08.545476 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:08.545538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:08.576588 1055021 cri.go:89] found id: ""
	I1208 02:00:08.576612 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.576621 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:08.576628 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:08.576719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:08.602665 1055021 cri.go:89] found id: ""
	I1208 02:00:08.602689 1055021 logs.go:282] 0 containers: []
	W1208 02:00:08.602697 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:08.602706 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:08.602737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:08.668015 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:08.668065 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:08.685174 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:08.685203 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:08.750092 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:08.741299    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.742048    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.743812    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.744405    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:08.746212    7232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:08.750113 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:08.750127 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:08.781244 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:08.781278 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.323549 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:11.333988 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:11.334059 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:11.359294 1055021 cri.go:89] found id: ""
	I1208 02:00:11.359316 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.359325 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:11.359331 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:11.359391 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:11.385252 1055021 cri.go:89] found id: ""
	I1208 02:00:11.385274 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.385283 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:11.385289 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:11.385354 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:11.411462 1055021 cri.go:89] found id: ""
	I1208 02:00:11.411485 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.411494 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:11.411501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:11.411560 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:11.437020 1055021 cri.go:89] found id: ""
	I1208 02:00:11.437043 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.437052 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:11.437059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:11.437142 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:11.462749 1055021 cri.go:89] found id: ""
	I1208 02:00:11.462774 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.462788 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:11.462795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:11.462912 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:11.487618 1055021 cri.go:89] found id: ""
	I1208 02:00:11.487642 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.487650 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:11.487656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:11.487738 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:11.517338 1055021 cri.go:89] found id: ""
	I1208 02:00:11.517411 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.517435 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:11.517454 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:11.517582 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:11.543576 1055021 cri.go:89] found id: ""
	I1208 02:00:11.543608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:11.543618 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:11.543670 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:11.543687 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:11.605714 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:11.597274    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.597933    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.599472    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.600169    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:11.601767    7337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:11.605738 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:11.605754 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:11.634573 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:11.634608 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:11.663270 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:11.663297 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:11.728036 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:11.728073 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.245900 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:14.259346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:14.259447 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:14.292891 1055021 cri.go:89] found id: ""
	I1208 02:00:14.292913 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.292922 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:14.292928 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:14.292995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:14.326384 1055021 cri.go:89] found id: ""
	I1208 02:00:14.326408 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.326418 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:14.326425 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:14.326485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:14.354623 1055021 cri.go:89] found id: ""
	I1208 02:00:14.354646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.354654 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:14.354660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:14.354719 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:14.382160 1055021 cri.go:89] found id: ""
	I1208 02:00:14.382187 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.382196 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:14.382203 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:14.382261 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:14.408072 1055021 cri.go:89] found id: ""
	I1208 02:00:14.408141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.408166 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:14.408184 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:14.408273 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:14.433739 1055021 cri.go:89] found id: ""
	I1208 02:00:14.433767 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.433776 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:14.433783 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:14.433889 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:14.460882 1055021 cri.go:89] found id: ""
	I1208 02:00:14.460906 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.460914 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:14.460921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:14.461002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:14.486630 1055021 cri.go:89] found id: ""
	I1208 02:00:14.486707 1055021 logs.go:282] 0 containers: []
	W1208 02:00:14.486732 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:14.486755 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:14.486781 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:14.552732 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:14.552769 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:14.570940 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:14.570975 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:14.636277 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:14.628043    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.628541    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.629996    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.630379    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:14.631793    7454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:14.636301 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:14.636317 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:14.664410 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:14.664447 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:17.192894 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:17.203129 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:17.203200 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:17.228497 1055021 cri.go:89] found id: ""
	I1208 02:00:17.228519 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.228528 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:17.228534 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:17.228598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:17.253841 1055021 cri.go:89] found id: ""
	I1208 02:00:17.253862 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.253871 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:17.253887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:17.253945 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:17.284067 1055021 cri.go:89] found id: ""
	I1208 02:00:17.284088 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.284097 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:17.284103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:17.284162 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:17.320641 1055021 cri.go:89] found id: ""
	I1208 02:00:17.320668 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.320678 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:17.320684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:17.320748 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:17.347071 1055021 cri.go:89] found id: ""
	I1208 02:00:17.347094 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.347103 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:17.347109 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:17.347227 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:17.373328 1055021 cri.go:89] found id: ""
	I1208 02:00:17.373357 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.373366 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:17.373372 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:17.373439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:17.400408 1055021 cri.go:89] found id: ""
	I1208 02:00:17.400437 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.400446 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:17.400456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:17.400515 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:17.426232 1055021 cri.go:89] found id: ""
	I1208 02:00:17.426268 1055021 logs.go:282] 0 containers: []
	W1208 02:00:17.426277 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:17.426286 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:17.426298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:17.491052 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:17.491092 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:17.509546 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:17.509575 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:17.578008 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:17.569570    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.570278    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.571915    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.572524    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:17.573733    7567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:17.578068 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:17.578090 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:17.606330 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:17.606368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:20.139003 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:20.149823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:20.149894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:20.176541 1055021 cri.go:89] found id: ""
	I1208 02:00:20.176568 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.176577 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:20.176583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:20.176647 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:20.209117 1055021 cri.go:89] found id: ""
	I1208 02:00:20.209141 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.209149 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:20.209156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:20.209222 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:20.235819 1055021 cri.go:89] found id: ""
	I1208 02:00:20.235846 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.235861 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:20.235867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:20.235933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:20.268968 1055021 cri.go:89] found id: ""
	I1208 02:00:20.268997 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.269006 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:20.269019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:20.269079 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:20.302684 1055021 cri.go:89] found id: ""
	I1208 02:00:20.302712 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.302721 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:20.302728 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:20.302814 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:20.330459 1055021 cri.go:89] found id: ""
	I1208 02:00:20.330535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.330550 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:20.330557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:20.330632 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:20.358743 1055021 cri.go:89] found id: ""
	I1208 02:00:20.358778 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.358787 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:20.358793 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:20.358881 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:20.384853 1055021 cri.go:89] found id: ""
	I1208 02:00:20.384883 1055021 logs.go:282] 0 containers: []
	W1208 02:00:20.384892 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:20.384909 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:20.384921 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:20.450466 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:20.450505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:20.468842 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:20.468872 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:20.533689 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:20.524668    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.525327    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527317    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.527773    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:20.529286    7679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:20.533717 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:20.533732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:20.561211 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:20.561245 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.093217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:23.103855 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:23.103935 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:23.129008 1055021 cri.go:89] found id: ""
	I1208 02:00:23.129084 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.129113 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:23.129122 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:23.129192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:23.154045 1055021 cri.go:89] found id: ""
	I1208 02:00:23.154071 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.154079 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:23.154086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:23.154144 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:23.179982 1055021 cri.go:89] found id: ""
	I1208 02:00:23.180009 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.180018 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:23.180025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:23.180085 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:23.205725 1055021 cri.go:89] found id: ""
	I1208 02:00:23.205751 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.205760 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:23.205767 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:23.205825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:23.233180 1055021 cri.go:89] found id: ""
	I1208 02:00:23.233206 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.233214 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:23.233221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:23.233280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:23.260814 1055021 cri.go:89] found id: ""
	I1208 02:00:23.260841 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.260850 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:23.260856 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:23.260915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:23.289337 1055021 cri.go:89] found id: ""
	I1208 02:00:23.289369 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.289379 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:23.289384 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:23.289451 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:23.326356 1055021 cri.go:89] found id: ""
	I1208 02:00:23.326383 1055021 logs.go:282] 0 containers: []
	W1208 02:00:23.326392 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:23.326401 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:23.326414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:23.344175 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:23.344207 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:23.409693 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:23.401304    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.401746    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.403607    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.404137    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:23.405745    7792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:23.409767 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:23.409793 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:23.437814 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:23.437848 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:23.472006 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:23.472034 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.036954 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:26.050218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:26.050295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:26.084077 1055021 cri.go:89] found id: ""
	I1208 02:00:26.084101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.084110 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:26.084117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:26.084179 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:26.115433 1055021 cri.go:89] found id: ""
	I1208 02:00:26.115458 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.115467 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:26.115473 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:26.115548 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:26.142798 1055021 cri.go:89] found id: ""
	I1208 02:00:26.142821 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.142829 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:26.142836 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:26.142923 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:26.169427 1055021 cri.go:89] found id: ""
	I1208 02:00:26.169449 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.169457 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:26.169465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:26.169523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:26.196837 1055021 cri.go:89] found id: ""
	I1208 02:00:26.196863 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.196873 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:26.196879 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:26.196940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:26.222671 1055021 cri.go:89] found id: ""
	I1208 02:00:26.222694 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.222702 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:26.222709 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:26.222770 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:26.258674 1055021 cri.go:89] found id: ""
	I1208 02:00:26.258696 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.258705 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:26.258711 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:26.258769 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:26.297463 1055021 cri.go:89] found id: ""
	I1208 02:00:26.297486 1055021 logs.go:282] 0 containers: []
	W1208 02:00:26.297496 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:26.297505 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:26.297520 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:26.329140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:26.329223 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:26.359625 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:26.359657 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:26.424937 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:26.424974 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:26.443260 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:26.443293 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:26.509592 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:26.501183    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.502031    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503663    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.503972    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:26.505467    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:29.010492 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:29.023086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:29.023160 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:29.051358 1055021 cri.go:89] found id: ""
	I1208 02:00:29.051380 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.051389 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:29.051395 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:29.051456 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:29.085536 1055021 cri.go:89] found id: ""
	I1208 02:00:29.085566 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.085575 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:29.085583 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:29.085649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:29.114380 1055021 cri.go:89] found id: ""
	I1208 02:00:29.114407 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.114416 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:29.114422 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:29.114483 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:29.139608 1055021 cri.go:89] found id: ""
	I1208 02:00:29.139697 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.139713 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:29.139722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:29.139800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:29.167030 1055021 cri.go:89] found id: ""
	I1208 02:00:29.167055 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.167100 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:29.167107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:29.167173 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:29.191898 1055021 cri.go:89] found id: ""
	I1208 02:00:29.191920 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.191929 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:29.191935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:29.191992 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:29.216839 1055021 cri.go:89] found id: ""
	I1208 02:00:29.216870 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.216879 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:29.216889 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:29.216975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:29.246347 1055021 cri.go:89] found id: ""
	I1208 02:00:29.246372 1055021 logs.go:282] 0 containers: []
	W1208 02:00:29.246382 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:29.246391 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:29.246421 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:29.266473 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:29.266509 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:29.345611 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:29.337007    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.337701    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339388    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.339926    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:29.341504    8017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:29.345636 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:29.345648 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:29.375020 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:29.375060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:29.402360 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:29.402386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:31.967515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:31.978076 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:31.978147 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:32.018381 1055021 cri.go:89] found id: ""
	I1208 02:00:32.018457 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.018480 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:32.018500 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:32.018611 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:32.054678 1055021 cri.go:89] found id: ""
	I1208 02:00:32.054700 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.054709 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:32.054715 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:32.054775 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:32.085659 1055021 cri.go:89] found id: ""
	I1208 02:00:32.085686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.085695 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:32.085701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:32.085809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:32.112827 1055021 cri.go:89] found id: ""
	I1208 02:00:32.112892 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.112907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:32.112914 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:32.112973 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:32.141486 1055021 cri.go:89] found id: ""
	I1208 02:00:32.141513 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.141521 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:32.141527 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:32.141591 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:32.166463 1055021 cri.go:89] found id: ""
	I1208 02:00:32.166489 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.166498 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:32.166504 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:32.166566 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:32.196018 1055021 cri.go:89] found id: ""
	I1208 02:00:32.196086 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.196111 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:32.196125 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:32.196198 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:32.219763 1055021 cri.go:89] found id: ""
	I1208 02:00:32.219802 1055021 logs.go:282] 0 containers: []
	W1208 02:00:32.219812 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:32.219821 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:32.219834 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:32.237401 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:32.237431 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:32.335697 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:32.326640    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.327342    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.328958    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.329504    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:32.331131    8136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:32.335720 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:32.335732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:32.364998 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:32.365043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:32.394072 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:32.394099 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:34.958230 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:34.968535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:34.968606 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:34.993490 1055021 cri.go:89] found id: ""
	I1208 02:00:34.993515 1055021 logs.go:282] 0 containers: []
	W1208 02:00:34.993524 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:34.993531 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:34.993588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:35.026482 1055021 cri.go:89] found id: ""
	I1208 02:00:35.026511 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.026521 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:35.026529 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:35.026595 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:35.062109 1055021 cri.go:89] found id: ""
	I1208 02:00:35.062138 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.062147 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:35.062154 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:35.062218 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:35.094672 1055021 cri.go:89] found id: ""
	I1208 02:00:35.094706 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.094715 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:35.094722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:35.094784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:35.120981 1055021 cri.go:89] found id: ""
	I1208 02:00:35.121007 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.121016 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:35.121022 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:35.121087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:35.147283 1055021 cri.go:89] found id: ""
	I1208 02:00:35.147310 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.147321 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:35.147329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:35.147392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:35.174946 1055021 cri.go:89] found id: ""
	I1208 02:00:35.175038 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.175075 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:35.175115 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:35.175224 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:35.205558 1055021 cri.go:89] found id: ""
	I1208 02:00:35.205583 1055021 logs.go:282] 0 containers: []
	W1208 02:00:35.205592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:35.205601 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:35.205636 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:35.273454 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:35.273537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:35.294102 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:35.294182 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:35.363206 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:35.354462    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.354947    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.356742    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.357669    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:35.358493    8252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:35.363227 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:35.363240 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:35.391418 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:35.391457 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:37.922946 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:37.933320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:37.933392 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:37.959213 1055021 cri.go:89] found id: ""
	I1208 02:00:37.959237 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.959247 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:37.959253 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:37.959311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:37.983822 1055021 cri.go:89] found id: ""
	I1208 02:00:37.983844 1055021 logs.go:282] 0 containers: []
	W1208 02:00:37.983853 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:37.983859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:37.983917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:38.015881 1055021 cri.go:89] found id: ""
	I1208 02:00:38.015909 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.015919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:38.015927 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:38.015994 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:38.047948 1055021 cri.go:89] found id: ""
	I1208 02:00:38.047971 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.047979 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:38.047985 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:38.048049 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:38.098187 1055021 cri.go:89] found id: ""
	I1208 02:00:38.098216 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.098227 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:38.098234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:38.098298 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:38.122930 1055021 cri.go:89] found id: ""
	I1208 02:00:38.122952 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.122960 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:38.122967 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:38.123028 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:38.148405 1055021 cri.go:89] found id: ""
	I1208 02:00:38.148439 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.148449 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:38.148455 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:38.148513 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:38.174446 1055021 cri.go:89] found id: ""
	I1208 02:00:38.174522 1055021 logs.go:282] 0 containers: []
	W1208 02:00:38.174544 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:38.174565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:38.174602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:38.239470 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:38.239505 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:38.257924 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:38.258079 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:38.328235 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:38.319284    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.319867    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.321832    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.322590    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:38.324240    8367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:38.328302 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:38.328321 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:38.356585 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:38.356619 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:40.887527 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:40.897939 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:40.898011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:40.922663 1055021 cri.go:89] found id: ""
	I1208 02:00:40.922686 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.922695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:40.922701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:40.922760 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:40.947304 1055021 cri.go:89] found id: ""
	I1208 02:00:40.947371 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.947397 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:40.947409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:40.947484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:40.973263 1055021 cri.go:89] found id: ""
	I1208 02:00:40.973290 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.973299 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:40.973305 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:40.973365 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:40.998615 1055021 cri.go:89] found id: ""
	I1208 02:00:40.998648 1055021 logs.go:282] 0 containers: []
	W1208 02:00:40.998658 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:40.998665 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:40.998735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:41.034153 1055021 cri.go:89] found id: ""
	I1208 02:00:41.034180 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.034190 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:41.034196 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:41.034255 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:41.063886 1055021 cri.go:89] found id: ""
	I1208 02:00:41.063916 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.063925 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:41.063931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:41.063993 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:41.090937 1055021 cri.go:89] found id: ""
	I1208 02:00:41.090966 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.090976 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:41.090982 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:41.091046 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:41.117814 1055021 cri.go:89] found id: ""
	I1208 02:00:41.117839 1055021 logs.go:282] 0 containers: []
	W1208 02:00:41.117849 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:41.117858 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:41.117870 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:41.182312 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:41.182348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:41.200044 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:41.200071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:41.273066 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:41.263718    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.264521    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266156    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.266459    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:41.268826    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:41.273095 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:41.273108 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:41.308256 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:41.308298 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:43.843380 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:43.854135 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:43.854204 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:43.879332 1055021 cri.go:89] found id: ""
	I1208 02:00:43.879356 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.879365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:43.879371 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:43.879431 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:43.903897 1055021 cri.go:89] found id: ""
	I1208 02:00:43.903921 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.903930 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:43.903935 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:43.904010 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:43.928349 1055021 cri.go:89] found id: ""
	I1208 02:00:43.928377 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.928386 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:43.928396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:43.928453 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:43.957013 1055021 cri.go:89] found id: ""
	I1208 02:00:43.957046 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.957060 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:43.957066 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:43.957137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:43.981711 1055021 cri.go:89] found id: ""
	I1208 02:00:43.981784 1055021 logs.go:282] 0 containers: []
	W1208 02:00:43.981819 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:43.981843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:43.981933 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:44.021808 1055021 cri.go:89] found id: ""
	I1208 02:00:44.021842 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.021851 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:44.021859 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:44.021940 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:44.053536 1055021 cri.go:89] found id: ""
	I1208 02:00:44.053608 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.053631 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:44.053650 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:44.053735 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:44.087893 1055021 cri.go:89] found id: ""
	I1208 02:00:44.087958 1055021 logs.go:282] 0 containers: []
	W1208 02:00:44.087975 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:44.087985 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:44.087997 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:44.153453 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:44.153493 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:44.172720 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:44.172750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:44.242553 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:44.233918    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.234573    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236179    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.236703    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:44.237849    8592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:44.242575 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:44.242587 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:44.273804 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:44.273889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:46.805601 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:46.815929 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:46.815999 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:46.840623 1055021 cri.go:89] found id: ""
	I1208 02:00:46.840646 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.840655 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:46.840661 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:46.840721 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:46.866056 1055021 cri.go:89] found id: ""
	I1208 02:00:46.866082 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.866090 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:46.866096 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:46.866156 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:46.890598 1055021 cri.go:89] found id: ""
	I1208 02:00:46.890623 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.890632 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:46.890638 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:46.890699 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:46.917031 1055021 cri.go:89] found id: ""
	I1208 02:00:46.917101 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.917125 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:46.917142 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:46.917230 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:46.941427 1055021 cri.go:89] found id: ""
	I1208 02:00:46.941450 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.941459 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:46.941465 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:46.941524 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:46.971991 1055021 cri.go:89] found id: ""
	I1208 02:00:46.972015 1055021 logs.go:282] 0 containers: []
	W1208 02:00:46.972024 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:46.972031 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:46.972087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:47.000365 1055021 cri.go:89] found id: ""
	I1208 02:00:47.000393 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.000402 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:47.000409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:47.000500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:47.039853 1055021 cri.go:89] found id: ""
	I1208 02:00:47.039934 1055021 logs.go:282] 0 containers: []
	W1208 02:00:47.039968 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:47.040014 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:47.040070 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:47.124159 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:47.124199 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:47.142393 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:47.142436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:47.204667 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:47.196257    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.196997    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.198491    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.199077    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:47.200630    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:47.204688 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:47.204700 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:47.233531 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:47.233572 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:49.777314 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:49.787953 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:49.788027 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:49.814344 1055021 cri.go:89] found id: ""
	I1208 02:00:49.814368 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.814376 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:49.814383 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:49.814443 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:49.843148 1055021 cri.go:89] found id: ""
	I1208 02:00:49.843172 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.843180 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:49.843187 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:49.843245 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:49.868221 1055021 cri.go:89] found id: ""
	I1208 02:00:49.868245 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.868253 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:49.868260 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:49.868319 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:49.892756 1055021 cri.go:89] found id: ""
	I1208 02:00:49.892782 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.892792 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:49.892799 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:49.892879 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:49.921697 1055021 cri.go:89] found id: ""
	I1208 02:00:49.921730 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.921738 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:49.921745 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:49.921818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:49.946935 1055021 cri.go:89] found id: ""
	I1208 02:00:49.947000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.947018 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:49.947025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:49.947102 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:49.972386 1055021 cri.go:89] found id: ""
	I1208 02:00:49.972410 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.972418 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:49.972427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:49.972485 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:49.997299 1055021 cri.go:89] found id: ""
	I1208 02:00:49.997324 1055021 logs.go:282] 0 containers: []
	W1208 02:00:49.997332 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:49.997342 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:49.997354 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:50.024427 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:50.024465 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:50.106428 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:50.097679    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.098298    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.099821    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.100337    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:50.101870    8814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:50.106452 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:50.106466 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:50.134825 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:50.134944 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:50.164257 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:50.164286 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
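The cycle above repeats the same per-component container probe; each "found id: """ / "0 containers" pair means crictl returned nothing for that component, i.e. the control-plane containers were never created. A rough hand-run equivalent of what ssh_runner is executing on the node, using only the commands already shown in this log (the `for` wrapper is an illustration, not minikube's code):

  # probe each expected control-plane component; an empty result matches the
  # "0 containers" lines above
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    echo "== $c =="
    sudo crictl ps -a --quiet --name="$c"
  done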
	I1208 02:00:52.731852 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:52.743466 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:52.743547 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:52.770730 1055021 cri.go:89] found id: ""
	I1208 02:00:52.770754 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.770763 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:52.770769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:52.770837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:52.795524 1055021 cri.go:89] found id: ""
	I1208 02:00:52.795547 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.795555 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:52.795562 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:52.795622 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:52.820947 1055021 cri.go:89] found id: ""
	I1208 02:00:52.820976 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.820986 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:52.820993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:52.821054 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:52.846461 1055021 cri.go:89] found id: ""
	I1208 02:00:52.846487 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.846495 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:52.846502 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:52.846614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:52.876556 1055021 cri.go:89] found id: ""
	I1208 02:00:52.876582 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.876591 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:52.876598 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:52.876658 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:52.902890 1055021 cri.go:89] found id: ""
	I1208 02:00:52.902915 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.902924 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:52.902931 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:52.902995 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:52.927861 1055021 cri.go:89] found id: ""
	I1208 02:00:52.927936 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.927952 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:52.927960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:52.928018 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:52.952070 1055021 cri.go:89] found id: ""
	I1208 02:00:52.952093 1055021 logs.go:282] 0 containers: []
	W1208 02:00:52.952102 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:52.952111 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:52.952123 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:52.969988 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:52.970071 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:53.047400 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:53.035709    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.036594    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.039517    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041404    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:53.041686    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:53.047420 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:53.047432 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:53.079007 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:53.079096 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:53.110493 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:53.110518 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:55.678655 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:55.689237 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:55.689308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:55.716663 1055021 cri.go:89] found id: ""
	I1208 02:00:55.716685 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.716694 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:55.716700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:55.716767 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:55.742016 1055021 cri.go:89] found id: ""
	I1208 02:00:55.742042 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.742051 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:55.742057 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:55.742117 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:55.771093 1055021 cri.go:89] found id: ""
	I1208 02:00:55.771116 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.771125 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:55.771131 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:55.771192 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:55.795221 1055021 cri.go:89] found id: ""
	I1208 02:00:55.795243 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.795252 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:55.795258 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:55.795321 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:55.824380 1055021 cri.go:89] found id: ""
	I1208 02:00:55.824402 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.824411 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:55.824417 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:55.824482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:55.853339 1055021 cri.go:89] found id: ""
	I1208 02:00:55.853362 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.853370 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:55.853376 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:55.853439 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:55.879120 1055021 cri.go:89] found id: ""
	I1208 02:00:55.879145 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.879154 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:55.879160 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:55.879229 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:55.904782 1055021 cri.go:89] found id: ""
	I1208 02:00:55.904811 1055021 logs.go:282] 0 containers: []
	W1208 02:00:55.904820 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:55.904829 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:55.904840 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:00:55.936603 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:55.936627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:56.002394 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:56.002436 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:56.025805 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:56.025962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:56.100621 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:56.092950    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.093347    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095012    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.095348    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:56.096798    9053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:56.100643 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:56.100655 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:58.632608 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:00:58.643205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:00:58.643281 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:00:58.668717 1055021 cri.go:89] found id: ""
	I1208 02:00:58.668741 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.668750 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:00:58.668756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:00:58.668818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:00:58.693510 1055021 cri.go:89] found id: ""
	I1208 02:00:58.693535 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.693543 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:00:58.693550 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:00:58.693614 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:00:58.718959 1055021 cri.go:89] found id: ""
	I1208 02:00:58.719050 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.719071 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:00:58.719079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:00:58.719153 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:00:58.743668 1055021 cri.go:89] found id: ""
	I1208 02:00:58.743691 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.743700 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:00:58.743707 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:00:58.743765 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:00:58.772612 1055021 cri.go:89] found id: ""
	I1208 02:00:58.772679 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.772700 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:00:58.772718 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:00:58.772809 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:00:58.798178 1055021 cri.go:89] found id: ""
	I1208 02:00:58.798204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.798212 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:00:58.798218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:00:58.798278 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:00:58.822926 1055021 cri.go:89] found id: ""
	I1208 02:00:58.823000 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.823018 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:00:58.823026 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:00:58.823097 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:00:58.849170 1055021 cri.go:89] found id: ""
	I1208 02:00:58.849204 1055021 logs.go:282] 0 containers: []
	W1208 02:00:58.849214 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:00:58.849249 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:00:58.849273 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:00:58.916845 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:00:58.916884 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:00:58.934980 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:00:58.935008 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:00:59.004330 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:00:58.994624    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.995145    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.996690    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.997066    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:00:58.998761    9148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:00:59.004355 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:00:59.004368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:00:59.034521 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:00:59.034558 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
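Every "describe nodes" attempt fails the same way: kubectl cannot reach the API server on localhost:8443 ("connection refused"), which is consistent with the empty kube-apiserver probes above. A minimal manual check on the node, assuming the kubeconfig path shown in the log (the curl probe is an added illustration, not something minikube runs here):

  # the exact command the log gatherer runs; it fails while the apiserver is down
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  # quick reachability probe of the apiserver port; "connection refused" means nothing is listening on 8443
  curl -k https://localhost:8443/healthz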
	I1208 02:01:01.569349 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:01.581275 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:01.581356 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:01.614013 1055021 cri.go:89] found id: ""
	I1208 02:01:01.614040 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.614052 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:01.614059 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:01.614120 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:01.642283 1055021 cri.go:89] found id: ""
	I1208 02:01:01.642311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.642321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:01.642327 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:01.642388 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:01.668888 1055021 cri.go:89] found id: ""
	I1208 02:01:01.668916 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.668927 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:01.668933 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:01.669045 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:01.696848 1055021 cri.go:89] found id: ""
	I1208 02:01:01.696890 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.696917 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:01.696924 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:01.697002 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:01.724280 1055021 cri.go:89] found id: ""
	I1208 02:01:01.724314 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.724323 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:01.724329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:01.724397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:01.757961 1055021 cri.go:89] found id: ""
	I1208 02:01:01.757993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.758002 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:01.758009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:01.758076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:01.791626 1055021 cri.go:89] found id: ""
	I1208 02:01:01.791652 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.791663 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:01.791669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:01.791734 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:01.824543 1055021 cri.go:89] found id: ""
	I1208 02:01:01.824614 1055021 logs.go:282] 0 containers: []
	W1208 02:01:01.824631 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:01.824643 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:01.824656 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:01.858339 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:01.858368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:01.923001 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:01.923043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:01.942107 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:01.942139 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:02.016342 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:02.005020    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.006725    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.007722    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.009771    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:02.010158    9273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:02.016379 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:02.016393 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.550723 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:04.561389 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:04.561458 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:04.587293 1055021 cri.go:89] found id: ""
	I1208 02:01:04.587319 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.587329 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:04.587335 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:04.587398 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:04.612287 1055021 cri.go:89] found id: ""
	I1208 02:01:04.612313 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.612321 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:04.612328 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:04.612389 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:04.637981 1055021 cri.go:89] found id: ""
	I1208 02:01:04.638006 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.638016 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:04.638023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:04.638083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:04.666122 1055021 cri.go:89] found id: ""
	I1208 02:01:04.666150 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.666159 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:04.666166 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:04.666228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:04.691775 1055021 cri.go:89] found id: ""
	I1208 02:01:04.691799 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.691807 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:04.691813 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:04.691877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:04.716584 1055021 cri.go:89] found id: ""
	I1208 02:01:04.716610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.716619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:04.716626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:04.716684 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:04.741247 1055021 cri.go:89] found id: ""
	I1208 02:01:04.741284 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.741297 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:04.741303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:04.741394 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:04.777041 1055021 cri.go:89] found id: ""
	I1208 02:01:04.777070 1055021 logs.go:282] 0 containers: []
	W1208 02:01:04.777079 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:04.777088 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:04.777100 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:04.797448 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:04.797478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:04.865442 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:04.857067    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.857546    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859247    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.859837    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:04.861441    9370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:04.865465 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:04.865478 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:04.893232 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:04.893270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:04.921152 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:04.921183 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.486177 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:07.496522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:07.496608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:07.521126 1055021 cri.go:89] found id: ""
	I1208 02:01:07.521202 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.521226 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:07.521244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:07.521333 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:07.549393 1055021 cri.go:89] found id: ""
	I1208 02:01:07.549458 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.549483 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:07.549501 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:07.549585 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:07.575624 1055021 cri.go:89] found id: ""
	I1208 02:01:07.575699 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.575715 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:07.575722 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:07.575784 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:07.604231 1055021 cri.go:89] found id: ""
	I1208 02:01:07.604296 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.604310 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:07.604317 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:07.604377 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:07.629146 1055021 cri.go:89] found id: ""
	I1208 02:01:07.629177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.629186 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:07.629192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:07.629267 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:07.654573 1055021 cri.go:89] found id: ""
	I1208 02:01:07.654598 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.654607 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:07.654614 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:07.654682 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:07.679672 1055021 cri.go:89] found id: ""
	I1208 02:01:07.679746 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.679762 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:07.679769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:07.679841 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:07.705327 1055021 cri.go:89] found id: ""
	I1208 02:01:07.705353 1055021 logs.go:282] 0 containers: []
	W1208 02:01:07.705362 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:07.705371 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:07.705386 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:07.770583 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:07.770665 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:07.788444 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:07.788473 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:07.862214 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:07.853643    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.854317    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.855951    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.856476    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:07.858120    9486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:07.862236 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:07.862248 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:07.891006 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:07.891043 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.422919 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:10.433424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:10.433496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:10.458269 1055021 cri.go:89] found id: ""
	I1208 02:01:10.458295 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.458303 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:10.458319 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:10.458397 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:10.485114 1055021 cri.go:89] found id: ""
	I1208 02:01:10.485138 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.485146 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:10.485152 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:10.485211 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:10.512785 1055021 cri.go:89] found id: ""
	I1208 02:01:10.512808 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.512817 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:10.512823 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:10.512884 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:10.538032 1055021 cri.go:89] found id: ""
	I1208 02:01:10.538057 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.538066 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:10.538072 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:10.538130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:10.568288 1055021 cri.go:89] found id: ""
	I1208 02:01:10.568311 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.568364 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:10.568379 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:10.568445 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:10.593987 1055021 cri.go:89] found id: ""
	I1208 02:01:10.594012 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.594021 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:10.594028 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:10.594087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:10.619212 1055021 cri.go:89] found id: ""
	I1208 02:01:10.619237 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.619245 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:10.619251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:10.619311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:10.645349 1055021 cri.go:89] found id: ""
	I1208 02:01:10.645384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:10.645393 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:10.645402 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:10.645414 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:10.707691 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:10.698979    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.699814    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701331    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.701914    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:10.703826    9594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:10.707713 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:10.707726 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:10.735113 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:10.735148 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:10.768113 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:10.768142 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:10.843634 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:10.843672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
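The pgrep probe for a running kube-apiserver recurs roughly every three seconds through this stretch (02:00:49, :52, :55, :58, 02:01:01, and so on). A sketch of that wait as it could be reproduced by hand, assuming the ~3 s interval read off the timestamps (the loop itself is illustrative, not minikube source):

  # wait until an apiserver process for this profile appears, polling every ~3s
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 3
  done
  echo "kube-apiserver process found"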
	I1208 02:01:13.362994 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:13.373991 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:13.374082 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:13.400090 1055021 cri.go:89] found id: ""
	I1208 02:01:13.400127 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.400136 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:13.400143 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:13.400212 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:13.425846 1055021 cri.go:89] found id: ""
	I1208 02:01:13.425872 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.425881 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:13.425887 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:13.425949 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:13.451450 1055021 cri.go:89] found id: ""
	I1208 02:01:13.451478 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.451487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:13.451493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:13.451554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:13.476315 1055021 cri.go:89] found id: ""
	I1208 02:01:13.476341 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.476350 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:13.476357 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:13.476419 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:13.503320 1055021 cri.go:89] found id: ""
	I1208 02:01:13.503346 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.503355 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:13.503362 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:13.503430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:13.528258 1055021 cri.go:89] found id: ""
	I1208 02:01:13.528290 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.528299 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:13.528306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:13.528375 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:13.553751 1055021 cri.go:89] found id: ""
	I1208 02:01:13.553784 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.553794 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:13.553800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:13.553871 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:13.580159 1055021 cri.go:89] found id: ""
	I1208 02:01:13.580183 1055021 logs.go:282] 0 containers: []
	W1208 02:01:13.580192 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:13.580200 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:13.580212 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:13.649628 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:13.649678 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:13.668358 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:13.668451 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:13.739767 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:13.731508    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.732248    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.733751    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.734334    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:13.735930    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:13.739835 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:13.739881 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:13.771646 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:13.771684 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.306613 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:16.317302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:16.317372 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:16.343331 1055021 cri.go:89] found id: ""
	I1208 02:01:16.343356 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.343365 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:16.343374 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:16.343433 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:16.369486 1055021 cri.go:89] found id: ""
	I1208 02:01:16.369507 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.369516 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:16.369522 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:16.369589 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:16.394887 1055021 cri.go:89] found id: ""
	I1208 02:01:16.394911 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.394919 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:16.394926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:16.394983 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:16.419429 1055021 cri.go:89] found id: ""
	I1208 02:01:16.419453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.419461 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:16.419467 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:16.419532 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:16.447941 1055021 cri.go:89] found id: ""
	I1208 02:01:16.448014 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.448038 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:16.448060 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:16.448137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:16.477380 1055021 cri.go:89] found id: ""
	I1208 02:01:16.477404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.477414 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:16.477420 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:16.477479 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:16.502633 1055021 cri.go:89] found id: ""
	I1208 02:01:16.502658 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.502667 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:16.502674 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:16.502776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:16.532861 1055021 cri.go:89] found id: ""
	I1208 02:01:16.532886 1055021 logs.go:282] 0 containers: []
	W1208 02:01:16.532895 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:16.532904 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:16.532943 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:16.561207 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:16.561235 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:16.629585 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:16.629623 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:16.647847 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:16.647876 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:16.713384 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:16.705178    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.705807    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.707467    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.708030    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:16.709480    9833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:16.713404 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:16.713417 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.242742 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:19.253432 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:19.253496 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:19.282053 1055021 cri.go:89] found id: ""
	I1208 02:01:19.282075 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.282091 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:19.282097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:19.282154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:19.317196 1055021 cri.go:89] found id: ""
	I1208 02:01:19.317218 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.317226 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:19.317232 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:19.317291 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:19.344133 1055021 cri.go:89] found id: ""
	I1208 02:01:19.344155 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.344164 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:19.344170 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:19.344231 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:19.369544 1055021 cri.go:89] found id: ""
	I1208 02:01:19.369567 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.369576 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:19.369582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:19.369641 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:19.394138 1055021 cri.go:89] found id: ""
	I1208 02:01:19.394161 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.394170 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:19.394176 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:19.394234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:19.421882 1055021 cri.go:89] found id: ""
	I1208 02:01:19.421906 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.421915 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:19.421921 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:19.421991 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:19.447254 1055021 cri.go:89] found id: ""
	I1208 02:01:19.447280 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.447289 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:19.447295 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:19.447359 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:19.471872 1055021 cri.go:89] found id: ""
	I1208 02:01:19.471898 1055021 logs.go:282] 0 containers: []
	W1208 02:01:19.471907 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:19.471916 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:19.471929 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:19.537545 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:19.537583 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:19.556105 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:19.556134 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:19.617255 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:19.609285    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.609703    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611246    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.611578    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:19.613126    9938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:19.617275 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:19.617288 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:19.645378 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:19.645413 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.176988 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:22.187407 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:22.187482 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:22.216526 1055021 cri.go:89] found id: ""
	I1208 02:01:22.216551 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.216560 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:22.216567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:22.216629 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:22.241409 1055021 cri.go:89] found id: ""
	I1208 02:01:22.241437 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.241446 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:22.241452 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:22.241510 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:22.275844 1055021 cri.go:89] found id: ""
	I1208 02:01:22.275873 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.275882 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:22.275888 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:22.275951 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:22.304532 1055021 cri.go:89] found id: ""
	I1208 02:01:22.304560 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.304575 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:22.304582 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:22.304640 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:22.347626 1055021 cri.go:89] found id: ""
	I1208 02:01:22.347653 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.347663 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:22.347669 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:22.347730 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:22.374178 1055021 cri.go:89] found id: ""
	I1208 02:01:22.374205 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.374215 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:22.374221 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:22.374280 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:22.404202 1055021 cri.go:89] found id: ""
	I1208 02:01:22.404229 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.404238 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:22.404244 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:22.404311 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:22.429827 1055021 cri.go:89] found id: ""
	I1208 02:01:22.429852 1055021 logs.go:282] 0 containers: []
	W1208 02:01:22.429861 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:22.429869 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:22.429880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:22.461216 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:22.461241 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:22.529595 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:22.529634 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:22.547808 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:22.547841 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:22.614795 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:22.606612   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.607490   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609064   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.609389   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:22.610908   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:22.614824 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:22.614836 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.143485 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:25.154329 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:25.154413 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:25.180079 1055021 cri.go:89] found id: ""
	I1208 02:01:25.180105 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.180114 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:25.180121 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:25.180180 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:25.204723 1055021 cri.go:89] found id: ""
	I1208 02:01:25.204753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.204761 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:25.204768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:25.204825 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:25.229571 1055021 cri.go:89] found id: ""
	I1208 02:01:25.229596 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.229604 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:25.229611 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:25.229669 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:25.256859 1055021 cri.go:89] found id: ""
	I1208 02:01:25.256888 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.256896 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:25.256903 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:25.256966 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:25.286130 1055021 cri.go:89] found id: ""
	I1208 02:01:25.286159 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.286169 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:25.286175 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:25.286240 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:25.316764 1055021 cri.go:89] found id: ""
	I1208 02:01:25.316797 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.316806 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:25.316819 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:25.316888 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:25.343685 1055021 cri.go:89] found id: ""
	I1208 02:01:25.343753 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.343781 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:25.343795 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:25.343874 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:25.368793 1055021 cri.go:89] found id: ""
	I1208 02:01:25.368819 1055021 logs.go:282] 0 containers: []
	W1208 02:01:25.368828 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:25.368864 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:25.368882 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:25.386567 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:25.386594 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:25.454148 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:25.445339   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.446127   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448558   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.448949   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:25.450191   10163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:25.454180 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:25.454193 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:25.482372 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:25.482406 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:25.512534 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:25.512561 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.077014 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:28.087810 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:28.087929 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:28.117064 1055021 cri.go:89] found id: ""
	I1208 02:01:28.117090 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.117100 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:28.117107 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:28.117166 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:28.142720 1055021 cri.go:89] found id: ""
	I1208 02:01:28.142747 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.142756 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:28.142763 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:28.142820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:28.169323 1055021 cri.go:89] found id: ""
	I1208 02:01:28.169349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.169357 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:28.169364 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:28.169423 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:28.198413 1055021 cri.go:89] found id: ""
	I1208 02:01:28.198441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.198450 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:28.198456 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:28.198538 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:28.222900 1055021 cri.go:89] found id: ""
	I1208 02:01:28.222925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.222935 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:28.222941 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:28.223006 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:28.252429 1055021 cri.go:89] found id: ""
	I1208 02:01:28.252453 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.252462 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:28.252468 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:28.252528 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:28.285260 1055021 cri.go:89] found id: ""
	I1208 02:01:28.285287 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.285296 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:28.285302 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:28.285362 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:28.322093 1055021 cri.go:89] found id: ""
	I1208 02:01:28.322122 1055021 logs.go:282] 0 containers: []
	W1208 02:01:28.322131 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:28.322140 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:28.322151 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:28.358086 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:28.358113 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:28.422767 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:28.422811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:28.441151 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:28.441185 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:28.510892 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:28.502089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.502678   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.504486   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.505089   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:28.506662   10289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:28.510919 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:28.510932 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.041345 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:31.056282 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:31.056357 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:31.087982 1055021 cri.go:89] found id: ""
	I1208 02:01:31.088007 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.088017 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:31.088023 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:31.088086 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:31.113983 1055021 cri.go:89] found id: ""
	I1208 02:01:31.114005 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.114014 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:31.114025 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:31.114083 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:31.141045 1055021 cri.go:89] found id: ""
	I1208 02:01:31.141069 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.141078 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:31.141085 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:31.141154 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:31.167841 1055021 cri.go:89] found id: ""
	I1208 02:01:31.167864 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.167873 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:31.167880 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:31.167937 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:31.193449 1055021 cri.go:89] found id: ""
	I1208 02:01:31.193471 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.193479 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:31.193485 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:31.193542 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:31.220825 1055021 cri.go:89] found id: ""
	I1208 02:01:31.220850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.220859 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:31.220865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:31.220926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:31.246036 1055021 cri.go:89] found id: ""
	I1208 02:01:31.246063 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.246071 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:31.246077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:31.246140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:31.282360 1055021 cri.go:89] found id: ""
	I1208 02:01:31.282388 1055021 logs.go:282] 0 containers: []
	W1208 02:01:31.282396 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:31.282405 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:31.282416 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:31.351320 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:31.351368 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:31.370774 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:31.370887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:31.434743 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:31.426605   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.427309   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.428851   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.429326   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:31.430831   10391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:31.434763 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:31.434775 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:31.462946 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:31.462982 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:33.992261 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:34.004797 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:34.004891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:34.044483 1055021 cri.go:89] found id: ""
	I1208 02:01:34.044506 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.044516 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:34.044523 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:34.044598 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:34.072528 1055021 cri.go:89] found id: ""
	I1208 02:01:34.072564 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.072573 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:34.072580 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:34.072654 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:34.102278 1055021 cri.go:89] found id: ""
	I1208 02:01:34.102357 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.102379 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:34.102399 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:34.102487 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:34.129526 1055021 cri.go:89] found id: ""
	I1208 02:01:34.129601 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.129634 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:34.129656 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:34.129776 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:34.155663 1055021 cri.go:89] found id: ""
	I1208 02:01:34.155689 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.155698 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:34.155704 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:34.155777 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:34.186951 1055021 cri.go:89] found id: ""
	I1208 02:01:34.186978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.186988 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:34.186996 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:34.187104 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:34.212379 1055021 cri.go:89] found id: ""
	I1208 02:01:34.212404 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.212423 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:34.212430 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:34.212489 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:34.238401 1055021 cri.go:89] found id: ""
	I1208 02:01:34.238438 1055021 logs.go:282] 0 containers: []
	W1208 02:01:34.238447 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:34.238456 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:34.238468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:34.278895 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:34.278970 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:34.356262 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:34.356303 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:34.376513 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:34.376545 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:34.447804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:34.439154   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.439768   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441421   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.441958   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:34.443514   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:34.447829 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:34.447843 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:36.976756 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:36.987574 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:36.987651 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:37.035351 1055021 cri.go:89] found id: ""
	I1208 02:01:37.035376 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.035386 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:37.035393 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:37.035457 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:37.065004 1055021 cri.go:89] found id: ""
	I1208 02:01:37.065026 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.065034 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:37.065041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:37.065099 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:37.092804 1055021 cri.go:89] found id: ""
	I1208 02:01:37.092828 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.092837 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:37.092843 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:37.092901 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:37.117820 1055021 cri.go:89] found id: ""
	I1208 02:01:37.117849 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.117857 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:37.117865 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:37.117924 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:37.143955 1055021 cri.go:89] found id: ""
	I1208 02:01:37.143978 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.143987 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:37.143993 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:37.144055 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:37.173740 1055021 cri.go:89] found id: ""
	I1208 02:01:37.173764 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.173772 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:37.173779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:37.173838 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:37.202687 1055021 cri.go:89] found id: ""
	I1208 02:01:37.202710 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.202719 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:37.202725 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:37.202786 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:37.229307 1055021 cri.go:89] found id: ""
	I1208 02:01:37.229331 1055021 logs.go:282] 0 containers: []
	W1208 02:01:37.229339 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:37.229347 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:37.229360 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:37.247500 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:37.247530 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:37.329229 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:37.320604   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.321402   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323081   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.323574   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:37.325159   10607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:37.329252 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:37.329267 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:37.358197 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:37.358238 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:37.387860 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:37.387889 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:39.956266 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:39.966752 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:39.966823 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:39.991660 1055021 cri.go:89] found id: ""
	I1208 02:01:39.991686 1055021 logs.go:282] 0 containers: []
	W1208 02:01:39.991695 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:39.991701 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:39.991763 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:40.027823 1055021 cri.go:89] found id: ""
	I1208 02:01:40.027905 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.027928 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:40.027949 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:40.028063 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:40.064388 1055021 cri.go:89] found id: ""
	I1208 02:01:40.064464 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.064487 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:40.064508 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:40.064594 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:40.094787 1055021 cri.go:89] found id: ""
	I1208 02:01:40.094814 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.094832 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:40.094858 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:40.094922 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:40.120620 1055021 cri.go:89] found id: ""
	I1208 02:01:40.120645 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.120654 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:40.120660 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:40.120720 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:40.153070 1055021 cri.go:89] found id: ""
	I1208 02:01:40.153097 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.153106 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:40.153112 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:40.153183 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:40.181896 1055021 cri.go:89] found id: ""
	I1208 02:01:40.181925 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.181935 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:40.181942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:40.182004 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:40.209414 1055021 cri.go:89] found id: ""
	I1208 02:01:40.209441 1055021 logs.go:282] 0 containers: []
	W1208 02:01:40.209450 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:40.209459 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:40.209470 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:40.274756 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:40.274858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:40.294225 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:40.294364 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:40.365754 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:40.357329   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.357838   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.359579   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.360172   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:40.361801   10729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:40.365778 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:40.365791 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:40.394699 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:40.394732 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:42.924136 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:42.934800 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:42.934894 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:42.961825 1055021 cri.go:89] found id: ""
	I1208 02:01:42.961850 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.961859 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:42.961867 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:42.961927 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:42.988379 1055021 cri.go:89] found id: ""
	I1208 02:01:42.988403 1055021 logs.go:282] 0 containers: []
	W1208 02:01:42.988412 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:42.988418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:42.988503 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:43.023024 1055021 cri.go:89] found id: ""
	I1208 02:01:43.023047 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.023056 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:43.023063 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:43.023139 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:43.057964 1055021 cri.go:89] found id: ""
	I1208 02:01:43.057993 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.058001 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:43.058008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:43.058073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:43.088198 1055021 cri.go:89] found id: ""
	I1208 02:01:43.088221 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.088229 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:43.088235 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:43.088295 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:43.116924 1055021 cri.go:89] found id: ""
	I1208 02:01:43.116950 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.116959 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:43.116965 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:43.117042 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:43.143043 1055021 cri.go:89] found id: ""
	I1208 02:01:43.143156 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.143172 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:43.143180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:43.143274 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:43.172524 1055021 cri.go:89] found id: ""
	I1208 02:01:43.172547 1055021 logs.go:282] 0 containers: []
	W1208 02:01:43.172556 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:43.172565 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:43.172577 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:43.237127 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:43.237162 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:43.256485 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:43.256516 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:43.325704 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:43.315990   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.316748   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.319965   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.320783   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:43.321894   10839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:43.325725 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:43.325737 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:43.354439 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:43.354477 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:45.885598 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:45.896346 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:45.896416 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:45.921473 1055021 cri.go:89] found id: ""
	I1208 02:01:45.921499 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.921508 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:45.921515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:45.921576 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:45.945701 1055021 cri.go:89] found id: ""
	I1208 02:01:45.945725 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.945734 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:45.945740 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:45.945800 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:45.973191 1055021 cri.go:89] found id: ""
	I1208 02:01:45.973213 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.973222 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:45.973228 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:45.973289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:45.999665 1055021 cri.go:89] found id: ""
	I1208 02:01:45.999741 1055021 logs.go:282] 0 containers: []
	W1208 02:01:45.999764 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:45.999782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:45.999872 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:46.041104 1055021 cri.go:89] found id: ""
	I1208 02:01:46.041176 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.041202 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:46.041224 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:46.041300 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:46.076259 1055021 cri.go:89] found id: ""
	I1208 02:01:46.076332 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.076355 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:46.076373 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:46.076450 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:46.108098 1055021 cri.go:89] found id: ""
	I1208 02:01:46.108163 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.108179 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:46.108186 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:46.108247 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:46.134928 1055021 cri.go:89] found id: ""
	I1208 02:01:46.134964 1055021 logs.go:282] 0 containers: []
	W1208 02:01:46.134974 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:46.134983 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:46.134995 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:46.164421 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:46.164498 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:46.233311 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:46.233358 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:46.253422 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:46.253502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:46.336577 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:46.328021   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.328654   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330243   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.330820   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:46.332621   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:46.336600 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:46.336614 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:48.865787 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:48.876567 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:48.876642 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:48.901147 1055021 cri.go:89] found id: ""
	I1208 02:01:48.901177 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.901185 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:48.901192 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:48.901250 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:48.927326 1055021 cri.go:89] found id: ""
	I1208 02:01:48.927351 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.927360 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:48.927366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:48.927424 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:48.951970 1055021 cri.go:89] found id: ""
	I1208 02:01:48.951994 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.952003 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:48.952009 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:48.952073 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:48.976700 1055021 cri.go:89] found id: ""
	I1208 02:01:48.976724 1055021 logs.go:282] 0 containers: []
	W1208 02:01:48.976732 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:48.976739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:48.976796 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:49.005321 1055021 cri.go:89] found id: ""
	I1208 02:01:49.005349 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.005359 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:49.005366 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:49.005432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:49.045336 1055021 cri.go:89] found id: ""
	I1208 02:01:49.045359 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.045368 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:49.045397 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:49.045478 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:49.074970 1055021 cri.go:89] found id: ""
	I1208 02:01:49.074997 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.075006 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:49.075012 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:49.075070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:49.100757 1055021 cri.go:89] found id: ""
	I1208 02:01:49.100780 1055021 logs.go:282] 0 containers: []
	W1208 02:01:49.100788 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:49.100796 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:49.100808 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:49.165827 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:49.165862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:49.183539 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:49.183618 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:49.249850 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:49.241597   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.242194   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.243736   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.244335   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:49.245906   11059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:49.249874 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:49.249887 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:49.280238 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:49.280270 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:51.819515 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:51.830251 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:51.830329 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:51.856077 1055021 cri.go:89] found id: ""
	I1208 02:01:51.856098 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.856107 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:51.856113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:51.856170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:51.882057 1055021 cri.go:89] found id: ""
	I1208 02:01:51.882086 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.882096 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:51.882103 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:51.882170 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:51.908531 1055021 cri.go:89] found id: ""
	I1208 02:01:51.908572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.908582 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:51.908588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:51.908649 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:51.933571 1055021 cri.go:89] found id: ""
	I1208 02:01:51.933594 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.933603 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:51.933610 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:51.933671 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:51.959716 1055021 cri.go:89] found id: ""
	I1208 02:01:51.959777 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.959800 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:51.959825 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:51.959903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:51.985320 1055021 cri.go:89] found id: ""
	I1208 02:01:51.985384 1055021 logs.go:282] 0 containers: []
	W1208 02:01:51.985409 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:51.985427 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:51.985507 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:52.029640 1055021 cri.go:89] found id: ""
	I1208 02:01:52.029709 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.029736 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:52.029756 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:52.029835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:52.060725 1055021 cri.go:89] found id: ""
	I1208 02:01:52.060803 1055021 logs.go:282] 0 containers: []
	W1208 02:01:52.060826 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:52.060848 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:52.060874 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:52.129431 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:52.129468 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:52.148064 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:52.148095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:52.220103 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:52.212032   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.212805   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214364   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.214666   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:52.216211   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:52.220125 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:52.220137 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:52.248853 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:52.248892 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:54.781319 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:54.791942 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:54.792009 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:54.816799 1055021 cri.go:89] found id: ""
	I1208 02:01:54.816821 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.816830 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:54.816835 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:54.816893 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:54.846002 1055021 cri.go:89] found id: ""
	I1208 02:01:54.846028 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.846036 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:54.846043 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:54.846101 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:54.870704 1055021 cri.go:89] found id: ""
	I1208 02:01:54.870729 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.870737 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:54.870744 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:54.870807 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:54.897236 1055021 cri.go:89] found id: ""
	I1208 02:01:54.897302 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.897327 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:54.897347 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:54.897432 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:54.921729 1055021 cri.go:89] found id: ""
	I1208 02:01:54.921754 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.921763 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:54.921769 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:54.921830 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:54.949586 1055021 cri.go:89] found id: ""
	I1208 02:01:54.949610 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.949619 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:54.949626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:54.949687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:54.976595 1055021 cri.go:89] found id: ""
	I1208 02:01:54.976618 1055021 logs.go:282] 0 containers: []
	W1208 02:01:54.976627 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:54.976633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:54.976708 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:55.012149 1055021 cri.go:89] found id: ""
	I1208 02:01:55.012179 1055021 logs.go:282] 0 containers: []
	W1208 02:01:55.012188 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:55.012198 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:55.012211 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:55.089182 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:55.089225 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:55.107781 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:55.107811 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:55.175880 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:55.166637   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.167327   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.168872   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.170160   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:55.171745   11286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:55.175942 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:55.175962 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:55.205060 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:55.205095 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:01:57.733634 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:01:57.744236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:01:57.744308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:01:57.769149 1055021 cri.go:89] found id: ""
	I1208 02:01:57.769173 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.769182 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:01:57.769188 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:01:57.769246 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:01:57.796831 1055021 cri.go:89] found id: ""
	I1208 02:01:57.796860 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.796869 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:01:57.796876 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:01:57.796932 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:01:57.821809 1055021 cri.go:89] found id: ""
	I1208 02:01:57.821834 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.821844 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:01:57.821850 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:01:57.821917 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:01:57.849385 1055021 cri.go:89] found id: ""
	I1208 02:01:57.849410 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.849418 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:01:57.849424 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:01:57.849481 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:01:57.874645 1055021 cri.go:89] found id: ""
	I1208 02:01:57.874669 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.874678 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:01:57.874684 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:01:57.874742 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:01:57.899500 1055021 cri.go:89] found id: ""
	I1208 02:01:57.899572 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.899608 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:01:57.899623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:01:57.899695 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:01:57.926677 1055021 cri.go:89] found id: ""
	I1208 02:01:57.926711 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.926720 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:01:57.926727 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:01:57.926833 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:01:57.952159 1055021 cri.go:89] found id: ""
	I1208 02:01:57.952233 1055021 logs.go:282] 0 containers: []
	W1208 02:01:57.952249 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:01:57.952259 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:01:57.952271 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:01:58.017945 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:01:58.018082 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:01:58.036702 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:01:58.036877 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:01:58.109217 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:01:58.100508   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.101372   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103186   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.103612   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:01:58.105255   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:01:58.109239 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:01:58.109252 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:01:58.137424 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:01:58.137460 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
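	[annotation] The cycle above is minikube waiting for the control plane to come back: it first probes for a kube-apiserver process with pgrep, then asks CRI-O via crictl whether any container exists for each control-plane component, and every query comes back empty ("found id: \"\"", "0 containers: []"). A minimal Go sketch of that probe pattern, not minikube's actual implementation, assuming crictl on PATH and passwordless sudo; helper names are invented:

    // Sketch of the probe pattern visible in the log: pgrep for the process,
    // then `crictl ps -a --quiet --name=<component>` per component.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    // containerIDs returns one ID per line of `crictl ps -a --quiet`; the
    // output is empty when nothing matches -- the "0 containers: []" case.
    func containerIDs(name string) []string {
    	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	var ids []string
    	for _, l := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if l != "" {
    			ids = append(ids, l)
    		}
    	}
    	return ids
    }

    func main() {
    	// pgrep exits non-zero when no process matches the pattern.
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
    		fmt.Println("no kube-apiserver process yet")
    	}
    	for _, c := range components {
    		if len(containerIDs(c)) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    		}
    	}
    }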
	I1208 02:02:00.669211 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:00.679729 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:00.679803 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:00.704116 1055021 cri.go:89] found id: ""
	I1208 02:02:00.704140 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.704149 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:00.704156 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:00.704220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:00.728883 1055021 cri.go:89] found id: ""
	I1208 02:02:00.728908 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.728917 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:00.728923 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:00.728984 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:00.757361 1055021 cri.go:89] found id: ""
	I1208 02:02:00.757437 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.757453 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:00.757461 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:00.757523 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:00.784303 1055021 cri.go:89] found id: ""
	I1208 02:02:00.784332 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.784342 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:00.784349 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:00.784420 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:00.814794 1055021 cri.go:89] found id: ""
	I1208 02:02:00.814818 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.814827 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:00.814833 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:00.814915 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:00.840985 1055021 cri.go:89] found id: ""
	I1208 02:02:00.841052 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.841069 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:00.841077 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:00.841140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:00.869242 1055021 cri.go:89] found id: ""
	I1208 02:02:00.869268 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.869277 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:00.869283 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:00.869348 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:00.895515 1055021 cri.go:89] found id: ""
	I1208 02:02:00.895540 1055021 logs.go:282] 0 containers: []
	W1208 02:02:00.895549 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:00.895557 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:00.895600 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:00.963574 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:00.963611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:00.981868 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:00.981900 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:01.074452 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:01.063559   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.065215   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.066010   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.067881   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:01.068492   11525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:01.074541 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:01.074602 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:01.107635 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:01.107672 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
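	[annotation] Each "Run:" line above is executed inside the minikube node rather than on the host; the commands are wrapped in `/bin/bash -c "..."` and shipped over SSH (ssh_runner.go in the log). The following is only an illustration of that pattern, not ssh_runner's real code; host, port, user and key path are placeholders:

    // Hedged sketch: run one of the log's commands inside the node over SSH.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machine/id_rsa") // placeholder key path
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:2222", cfg) // placeholder address
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Same quoting style as the log: the remote command is wrapped in bash -c.
    	out, err := sess.CombinedOutput(`/bin/bash -c "sudo journalctl -u crio -n 400"`)
    	fmt.Printf("err=%v, %d bytes of CRI-O journal\n", err, len(out))
    }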
	I1208 02:02:03.643395 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:03.654301 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:03.654370 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:03.680571 1055021 cri.go:89] found id: ""
	I1208 02:02:03.680609 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.680619 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:03.680626 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:03.680696 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:03.709419 1055021 cri.go:89] found id: ""
	I1208 02:02:03.709444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.709453 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:03.709459 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:03.709518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:03.736028 1055021 cri.go:89] found id: ""
	I1208 02:02:03.736064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.736073 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:03.736079 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:03.736140 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:03.760906 1055021 cri.go:89] found id: ""
	I1208 02:02:03.760983 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.761005 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:03.761019 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:03.761095 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:03.789527 1055021 cri.go:89] found id: ""
	I1208 02:02:03.789563 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.789572 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:03.789578 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:03.789655 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:03.817176 1055021 cri.go:89] found id: ""
	I1208 02:02:03.817203 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.817211 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:03.817218 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:03.817277 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:03.847025 1055021 cri.go:89] found id: ""
	I1208 02:02:03.847053 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.847063 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:03.847070 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:03.847161 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:03.872945 1055021 cri.go:89] found id: ""
	I1208 02:02:03.872972 1055021 logs.go:282] 0 containers: []
	W1208 02:02:03.872981 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:03.872990 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:03.873002 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:03.938890 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:03.938927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:03.956669 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:03.956699 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:04.047856 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:04.037014   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.037571   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.040749   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.041545   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:04.043375   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:04.047931 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:04.047960 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:04.084291 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:04.084328 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:06.621579 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:06.632180 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:06.632262 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:06.658187 1055021 cri.go:89] found id: ""
	I1208 02:02:06.658214 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.658223 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:06.658230 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:06.658289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:06.683455 1055021 cri.go:89] found id: ""
	I1208 02:02:06.683479 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.683487 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:06.683494 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:06.683555 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:06.709121 1055021 cri.go:89] found id: ""
	I1208 02:02:06.709147 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.709156 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:06.709162 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:06.709220 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:06.735601 1055021 cri.go:89] found id: ""
	I1208 02:02:06.735639 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.735649 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:06.735655 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:06.735717 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:06.761793 1055021 cri.go:89] found id: ""
	I1208 02:02:06.761817 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.761826 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:06.761832 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:06.761891 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:06.787053 1055021 cri.go:89] found id: ""
	I1208 02:02:06.787075 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.787092 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:06.787099 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:06.787168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:06.815964 1055021 cri.go:89] found id: ""
	I1208 02:02:06.815990 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.815999 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:06.816006 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:06.816067 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:06.841508 1055021 cri.go:89] found id: ""
	I1208 02:02:06.841534 1055021 logs.go:282] 0 containers: []
	W1208 02:02:06.841543 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:06.841552 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:06.841564 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:06.906588 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:06.906627 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:06.925347 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:06.925380 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:07.004820 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:06.993318   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.993822   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.995400   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.996041   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:06.997768   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:07.004851 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:07.004865 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:07.038308 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:07.038348 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
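	[annotation] The repeated "failed describe nodes" blocks all have the same cause: kubectl tries to discover API groups at https://localhost:8443, nothing is listening there because no apiserver container exists, and each attempt fails with "connection refused" before kubectl gives up. A small illustration (not kubectl's code) of why that error is the "control plane not up yet, retry later" case:

    // ECONNREFUSED from a dial to localhost:8443 means nothing is bound to the
    // apiserver port yet, which a health loop can treat as retryable.
    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	_, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if errors.Is(err, syscall.ECONNREFUSED) {
    		fmt.Println("connection refused: apiserver not listening yet, retry")
    		return
    	}
    	fmt.Println("dial result:", err)
    }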
	I1208 02:02:09.573053 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:09.583792 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:09.583864 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:09.611232 1055021 cri.go:89] found id: ""
	I1208 02:02:09.611255 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.611265 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:09.611271 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:09.611340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:09.636029 1055021 cri.go:89] found id: ""
	I1208 02:02:09.636054 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.636063 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:09.636069 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:09.636127 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:09.662307 1055021 cri.go:89] found id: ""
	I1208 02:02:09.662334 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.662344 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:09.662350 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:09.662430 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:09.688279 1055021 cri.go:89] found id: ""
	I1208 02:02:09.688304 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.688314 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:09.688320 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:09.688385 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:09.717056 1055021 cri.go:89] found id: ""
	I1208 02:02:09.717081 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.717090 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:09.717097 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:09.717206 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:09.745719 1055021 cri.go:89] found id: ""
	I1208 02:02:09.745744 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.745753 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:09.745760 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:09.745820 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:09.774995 1055021 cri.go:89] found id: ""
	I1208 02:02:09.775020 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.775029 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:09.775035 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:09.775107 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:09.800142 1055021 cri.go:89] found id: ""
	I1208 02:02:09.800165 1055021 logs.go:282] 0 containers: []
	W1208 02:02:09.800174 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:09.800183 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:09.800196 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:09.817474 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:09.817504 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:09.881166 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:09.872512   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.873287   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.874867   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.875236   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:09.876791   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:09.881188 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:09.881201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:09.909282 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:09.909316 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:09.936890 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:09.936917 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:12.504767 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:12.517010 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:12.517087 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:12.552375 1055021 cri.go:89] found id: ""
	I1208 02:02:12.552405 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.552414 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:12.552421 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:12.552484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:12.581970 1055021 cri.go:89] found id: ""
	I1208 02:02:12.581993 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.582002 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:12.582008 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:12.582070 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:12.609191 1055021 cri.go:89] found id: ""
	I1208 02:02:12.609215 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.609223 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:12.609229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:12.609289 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:12.634872 1055021 cri.go:89] found id: ""
	I1208 02:02:12.634900 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.634909 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:12.634917 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:12.634977 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:12.660600 1055021 cri.go:89] found id: ""
	I1208 02:02:12.660622 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.660631 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:12.660637 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:12.660698 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:12.686371 1055021 cri.go:89] found id: ""
	I1208 02:02:12.686394 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.686402 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:12.686409 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:12.686468 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:12.711549 1055021 cri.go:89] found id: ""
	I1208 02:02:12.711574 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.711583 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:12.711589 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:12.711650 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:12.736572 1055021 cri.go:89] found id: ""
	I1208 02:02:12.736599 1055021 logs.go:282] 0 containers: []
	W1208 02:02:12.736609 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:12.736619 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:12.736631 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:12.754919 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:12.754947 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:12.825472 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:12.816868   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.817642   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819376   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.819968   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:12.821563   11976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:12.825494 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:12.825508 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:12.854189 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:12.854226 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:12.881205 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:12.881233 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
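	[annotation] Every cycle ends with the same gathering pass over five sources: the kubelet journal, dmesg (warnings and above), `kubectl describe nodes`, the CRI-O journal, and a container listing that falls back to docker when crictl is unavailable, each capped at the most recent 400 lines where a journal is involved. A rough sketch of that pass; the commands are copied from the log, the surrounding loop is illustrative only:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // One shell pipeline per log source, as they appear in the log above.
    var sources = map[string]string{
    	"kubelet":          `sudo journalctl -u kubelet -n 400`,
    	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
    	"CRI-O":            `sudo journalctl -u crio -n 400`,
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("%-16s err=%v bytes=%d\n", name, err, len(out))
    	}
    }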
	I1208 02:02:15.446588 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:15.457588 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:15.457660 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:15.482738 1055021 cri.go:89] found id: ""
	I1208 02:02:15.482763 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.482772 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:15.482779 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:15.482877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:15.511332 1055021 cri.go:89] found id: ""
	I1208 02:02:15.511364 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.511373 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:15.511380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:15.511446 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:15.555502 1055021 cri.go:89] found id: ""
	I1208 02:02:15.555528 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.555537 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:15.555543 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:15.555604 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:15.584568 1055021 cri.go:89] found id: ""
	I1208 02:02:15.584590 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.584598 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:15.584604 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:15.584662 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:15.613196 1055021 cri.go:89] found id: ""
	I1208 02:02:15.613219 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.613228 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:15.613234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:15.613299 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:15.642375 1055021 cri.go:89] found id: ""
	I1208 02:02:15.642396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.642404 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:15.642411 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:15.642469 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:15.666701 1055021 cri.go:89] found id: ""
	I1208 02:02:15.666724 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.666733 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:15.666739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:15.666804 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:15.694203 1055021 cri.go:89] found id: ""
	I1208 02:02:15.694226 1055021 logs.go:282] 0 containers: []
	W1208 02:02:15.694235 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:15.694244 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:15.694256 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:15.711985 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:15.712018 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:15.783845 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:15.774451   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.775376   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.776679   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.777881   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:15.778666   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:15.783867 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:15.783880 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:15.812138 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:15.812172 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:15.841785 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:15.841815 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.407879 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:18.418616 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:18.418687 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:18.452125 1055021 cri.go:89] found id: ""
	I1208 02:02:18.452149 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.452158 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:18.452165 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:18.452226 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:18.484590 1055021 cri.go:89] found id: ""
	I1208 02:02:18.484618 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.484627 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:18.484633 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:18.484693 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:18.521073 1055021 cri.go:89] found id: ""
	I1208 02:02:18.521101 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.521111 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:18.521117 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:18.521195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:18.552106 1055021 cri.go:89] found id: ""
	I1208 02:02:18.552131 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.552142 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:18.552149 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:18.552234 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:18.583000 1055021 cri.go:89] found id: ""
	I1208 02:02:18.583026 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.583034 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:18.583041 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:18.583108 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:18.608873 1055021 cri.go:89] found id: ""
	I1208 02:02:18.608901 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.608909 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:18.608916 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:18.608975 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:18.638459 1055021 cri.go:89] found id: ""
	I1208 02:02:18.638482 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.638491 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:18.638497 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:18.638554 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:18.664652 1055021 cri.go:89] found id: ""
	I1208 02:02:18.664678 1055021 logs.go:282] 0 containers: []
	W1208 02:02:18.664687 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:18.664696 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:18.664708 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:18.727887 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:18.719423   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.720035   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.721843   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.722481   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:18.724057   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:18.727909 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:18.727922 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:18.756733 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:18.756768 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:18.784791 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:18.784819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:18.854704 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:18.854747 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
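	[annotation] The timestamps on the pgrep lines (02:02:00, :03, :06, :09, :12, :15, :18, :21, :24) show the wait loop re-probing roughly every three seconds once the previous gathering pass finishes. An illustrative poll loop with that cadence; the interval, timeout, and probe are example values, not minikube's configured ones:

    package main

    import (
    	"fmt"
    	"time"
    )

    func apiserverUp() bool { return false } // stand-in probe

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // example overall timeout
    	ticker := time.NewTicker(3 * time.Second)   // matches the observed cadence
    	defer ticker.Stop()
    	for now := range ticker.C {
    		if apiserverUp() {
    			fmt.Println("apiserver is up")
    			return
    		}
    		if now.After(deadline) {
    			fmt.Println("timed out waiting for apiserver")
    			return
    		}
    	}
    }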
	I1208 02:02:21.373144 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:21.384002 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:21.384076 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:21.408827 1055021 cri.go:89] found id: ""
	I1208 02:02:21.408851 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.408860 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:21.408866 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:21.408926 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:21.437335 1055021 cri.go:89] found id: ""
	I1208 02:02:21.437366 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.437375 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:21.437380 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:21.437440 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:21.461726 1055021 cri.go:89] found id: ""
	I1208 02:02:21.461753 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.461762 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:21.461768 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:21.461827 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:21.486068 1055021 cri.go:89] found id: ""
	I1208 02:02:21.486095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.486104 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:21.486110 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:21.486168 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:21.521646 1055021 cri.go:89] found id: ""
	I1208 02:02:21.521671 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.521679 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:21.521686 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:21.521754 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:21.549687 1055021 cri.go:89] found id: ""
	I1208 02:02:21.549714 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.549723 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:21.549730 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:21.549789 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:21.584524 1055021 cri.go:89] found id: ""
	I1208 02:02:21.584600 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.584615 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:21.584623 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:21.584686 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:21.613834 1055021 cri.go:89] found id: ""
	I1208 02:02:21.613859 1055021 logs.go:282] 0 containers: []
	W1208 02:02:21.613868 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:21.613877 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:21.613888 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:21.679269 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:21.679305 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:21.696894 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:21.696924 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:21.763490 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:21.755482   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.756150   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.757688   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.758238   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:21.759704   12317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:21.763525 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:21.763538 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:21.791788 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:21.791819 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.320943 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:24.332441 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:24.332511 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:24.359381 1055021 cri.go:89] found id: ""
	I1208 02:02:24.359403 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.359412 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:24.359418 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:24.359484 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:24.385766 1055021 cri.go:89] found id: ""
	I1208 02:02:24.385789 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.385798 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:24.385804 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:24.385870 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:24.412597 1055021 cri.go:89] found id: ""
	I1208 02:02:24.412619 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.412633 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:24.412640 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:24.412700 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:24.438239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.438262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.438270 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:24.438277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:24.438336 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:24.465529 1055021 cri.go:89] found id: ""
	I1208 02:02:24.465551 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.465560 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:24.465566 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:24.465628 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:24.490130 1055021 cri.go:89] found id: ""
	I1208 02:02:24.490153 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.490162 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:24.490168 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:24.490228 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:24.531239 1055021 cri.go:89] found id: ""
	I1208 02:02:24.531262 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.531271 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:24.531277 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:24.531335 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:24.570624 1055021 cri.go:89] found id: ""
	I1208 02:02:24.570646 1055021 logs.go:282] 0 containers: []
	W1208 02:02:24.570654 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:24.570663 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:24.570676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:24.588822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:24.588852 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:24.650804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:24.642875   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.643514   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645005   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.645504   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:24.647043   12426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:24.650826 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:24.650858 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:24.680022 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:24.680060 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:24.708316 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:24.708352 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.274217 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:27.287664 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:27.287788 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:27.318113 1055021 cri.go:89] found id: ""
	I1208 02:02:27.318193 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.318215 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:27.318234 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:27.318332 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:27.344915 1055021 cri.go:89] found id: ""
	I1208 02:02:27.344943 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.344951 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:27.344958 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:27.345024 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:27.374469 1055021 cri.go:89] found id: ""
	I1208 02:02:27.374502 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.374512 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:27.374519 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:27.374588 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:27.399626 1055021 cri.go:89] found id: ""
	I1208 02:02:27.399665 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.399674 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:27.399680 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:27.399753 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:27.429184 1055021 cri.go:89] found id: ""
	I1208 02:02:27.429222 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.429230 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:27.429236 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:27.429303 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:27.453872 1055021 cri.go:89] found id: ""
	I1208 02:02:27.453910 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.453919 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:27.453926 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:27.453996 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:27.479093 1055021 cri.go:89] found id: ""
	I1208 02:02:27.479117 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.479127 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:27.479134 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:27.479195 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:27.513793 1055021 cri.go:89] found id: ""
	I1208 02:02:27.513820 1055021 logs.go:282] 0 containers: []
	W1208 02:02:27.513840 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:27.513849 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:27.513862 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:27.543879 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:27.543958 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:27.585714 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:27.585783 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:27.651465 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:27.651502 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:27.669169 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:27.669201 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:27.732840 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:27.724142   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.724807   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.726505   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.727102   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:27.728819   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.233103 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:30.244434 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:30.244504 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:30.286359 1055021 cri.go:89] found id: ""
	I1208 02:02:30.286381 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.286390 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:30.286396 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:30.286455 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:30.317925 1055021 cri.go:89] found id: ""
	I1208 02:02:30.317947 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.317955 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:30.317960 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:30.318020 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:30.352522 1055021 cri.go:89] found id: ""
	I1208 02:02:30.352543 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.352551 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:30.352557 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:30.352619 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:30.376895 1055021 cri.go:89] found id: ""
	I1208 02:02:30.376917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.376925 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:30.376932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:30.376989 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:30.401457 1055021 cri.go:89] found id: ""
	I1208 02:02:30.401478 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.401487 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:30.401493 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:30.401551 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:30.428269 1055021 cri.go:89] found id: ""
	I1208 02:02:30.428291 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.428300 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:30.428306 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:30.428366 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:30.452846 1055021 cri.go:89] found id: ""
	I1208 02:02:30.452869 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.452878 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:30.452884 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:30.452946 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:30.477617 1055021 cri.go:89] found id: ""
	I1208 02:02:30.477645 1055021 logs.go:282] 0 containers: []
	W1208 02:02:30.477655 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:30.477665 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:30.477676 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:30.507758 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:30.507782 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:30.577724 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:30.577802 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:30.598108 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:30.598136 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:30.663869 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:30.655697   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.656422   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.657932   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.658322   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:30.659857   12663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:30.663892 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:30.663905 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.192012 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:33.202802 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:33.202903 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:33.229607 1055021 cri.go:89] found id: ""
	I1208 02:02:33.229629 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.229638 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:33.229645 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:33.229704 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:33.257802 1055021 cri.go:89] found id: ""
	I1208 02:02:33.257837 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.257847 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:33.257854 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:33.257913 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:33.289073 1055021 cri.go:89] found id: ""
	I1208 02:02:33.289095 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.289103 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:33.289113 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:33.289171 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:33.317039 1055021 cri.go:89] found id: ""
	I1208 02:02:33.317060 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.317069 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:33.317075 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:33.317137 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:33.342479 1055021 cri.go:89] found id: ""
	I1208 02:02:33.342500 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.342509 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:33.342515 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:33.342577 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:33.367849 1055021 cri.go:89] found id: ""
	I1208 02:02:33.367877 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.367886 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:33.367892 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:33.367950 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:33.393711 1055021 cri.go:89] found id: ""
	I1208 02:02:33.393739 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.393748 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:33.393755 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:33.393818 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:33.419264 1055021 cri.go:89] found id: ""
	I1208 02:02:33.419286 1055021 logs.go:282] 0 containers: []
	W1208 02:02:33.419295 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:33.419303 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:33.419320 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:33.446586 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:33.446620 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:33.474605 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:33.474633 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:33.546521 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:33.546562 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:33.567522 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:33.567553 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:33.633164 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:33.625102   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.625694   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627304   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.627685   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:33.629123   12781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.133387 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:36.145051 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:36.145130 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:36.178396 1055021 cri.go:89] found id: ""
	I1208 02:02:36.178426 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.178434 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:36.178442 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:36.178500 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:36.204662 1055021 cri.go:89] found id: ""
	I1208 02:02:36.204685 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.204694 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:36.204700 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:36.204758 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:36.233744 1055021 cri.go:89] found id: ""
	I1208 02:02:36.233766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.233776 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:36.233782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:36.233844 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:36.271413 1055021 cri.go:89] found id: ""
	I1208 02:02:36.271436 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.271445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:36.271453 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:36.271518 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:36.299867 1055021 cri.go:89] found id: ""
	I1208 02:02:36.299889 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.299898 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:36.299905 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:36.299967 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:36.333748 1055021 cri.go:89] found id: ""
	I1208 02:02:36.333771 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.333779 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:36.333786 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:36.333877 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:36.359920 1055021 cri.go:89] found id: ""
	I1208 02:02:36.359944 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.359953 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:36.359959 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:36.360016 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:36.384561 1055021 cri.go:89] found id: ""
	I1208 02:02:36.384583 1055021 logs.go:282] 0 containers: []
	W1208 02:02:36.384592 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:36.384600 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:36.384611 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:36.449118 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:36.449153 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:36.469510 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:36.469537 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:36.544911 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:36.536152   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.536884   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.538467   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.539071   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.540616   12878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:36.544934 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:36.544972 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:36.577604 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:36.577640 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.106569 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:39.117314 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:39.117406 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:39.147330 1055021 cri.go:89] found id: ""
	I1208 02:02:39.147354 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.147362 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:39.147369 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:39.147429 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:39.175702 1055021 cri.go:89] found id: ""
	I1208 02:02:39.175725 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.175733 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:39.175739 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:39.175797 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:39.209892 1055021 cri.go:89] found id: ""
	I1208 02:02:39.209917 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.209926 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:39.209932 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:39.209990 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:39.235210 1055021 cri.go:89] found id: ""
	I1208 02:02:39.235239 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.235248 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:39.235255 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:39.235312 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:39.268421 1055021 cri.go:89] found id: ""
	I1208 02:02:39.268444 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.268453 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:39.268460 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:39.268520 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:39.308045 1055021 cri.go:89] found id: ""
	I1208 02:02:39.308070 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.308079 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:39.308086 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:39.308152 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:39.338659 1055021 cri.go:89] found id: ""
	I1208 02:02:39.338684 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.338693 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:39.338699 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:39.338759 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:39.369373 1055021 cri.go:89] found id: ""
	I1208 02:02:39.369396 1055021 logs.go:282] 0 containers: []
	W1208 02:02:39.369405 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:39.369414 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:39.369426 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:39.401929 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:39.401959 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:39.466665 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:39.466705 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:39.484758 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:39.484786 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:39.570718 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:39.559011   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.559908   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561668   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.561977   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:39.566203   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:39.570737 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:39.570750 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.101949 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:42.135199 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:42.135361 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:42.190279 1055021 cri.go:89] found id: ""
	I1208 02:02:42.190367 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.190393 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:42.190415 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:42.190545 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:42.222777 1055021 cri.go:89] found id: ""
	I1208 02:02:42.222883 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.222911 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:42.222934 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:42.223043 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:42.257086 1055021 cri.go:89] found id: ""
	I1208 02:02:42.257169 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.257193 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:42.257217 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:42.257340 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:42.290338 1055021 cri.go:89] found id: ""
	I1208 02:02:42.290421 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.290445 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:42.290464 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:42.290571 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:42.321497 1055021 cri.go:89] found id: ""
	I1208 02:02:42.321567 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.321592 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:42.321612 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:42.321710 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:42.351037 1055021 cri.go:89] found id: ""
	I1208 02:02:42.351157 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.351184 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:42.351205 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:42.351308 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:42.377225 1055021 cri.go:89] found id: ""
	I1208 02:02:42.377251 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.377259 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:42.377266 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:42.377324 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:42.403038 1055021 cri.go:89] found id: ""
	I1208 02:02:42.403064 1055021 logs.go:282] 0 containers: []
	W1208 02:02:42.403073 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:42.403117 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:42.403130 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:42.468670 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:42.468709 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:42.486822 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:42.486906 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:42.576804 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:42.565177   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.565930   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.567626   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.568209   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:42.569865   13111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:42.576828 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:42.576844 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:42.609307 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:42.609345 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:45.139048 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:45.153298 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:45.153393 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:45.190816 1055021 cri.go:89] found id: ""
	I1208 02:02:45.190864 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.190874 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:45.190882 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:45.190954 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:45.248053 1055021 cri.go:89] found id: ""
	I1208 02:02:45.248087 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.248097 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:45.248105 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:45.248178 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:45.291403 1055021 cri.go:89] found id: ""
	I1208 02:02:45.291441 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.291506 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:45.291539 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:45.291685 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:45.327809 1055021 cri.go:89] found id: ""
	I1208 02:02:45.327885 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.327907 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:45.327925 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:45.328011 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:45.356269 1055021 cri.go:89] found id: ""
	I1208 02:02:45.356293 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.356302 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:45.356308 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:45.356386 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:45.385189 1055021 cri.go:89] found id: ""
	I1208 02:02:45.385213 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.385222 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:45.385229 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:45.385309 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:45.413524 1055021 cri.go:89] found id: ""
	I1208 02:02:45.413549 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.413558 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:45.413565 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:45.413652 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:45.443469 1055021 cri.go:89] found id: ""
	I1208 02:02:45.443547 1055021 logs.go:282] 0 containers: []
	W1208 02:02:45.443563 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:45.443572 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:45.443584 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:45.515350 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:45.515441 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:45.534931 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:45.534961 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:45.612239 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:45.604874   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.605260   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606565   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.606945   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:45.608416   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:45.612262 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:45.612274 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:45.640465 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:45.640503 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.170309 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:48.181762 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 02:02:48.181835 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 02:02:48.209264 1055021 cri.go:89] found id: ""
	I1208 02:02:48.209288 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.209297 1055021 logs.go:284] No container was found matching "kube-apiserver"
	I1208 02:02:48.209303 1055021 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 02:02:48.209364 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 02:02:48.236743 1055021 cri.go:89] found id: ""
	I1208 02:02:48.236766 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.236775 1055021 logs.go:284] No container was found matching "etcd"
	I1208 02:02:48.236782 1055021 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 02:02:48.236847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 02:02:48.275731 1055021 cri.go:89] found id: ""
	I1208 02:02:48.275757 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.275765 1055021 logs.go:284] No container was found matching "coredns"
	I1208 02:02:48.275772 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 02:02:48.275837 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 02:02:48.311639 1055021 cri.go:89] found id: ""
	I1208 02:02:48.311667 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.311676 1055021 logs.go:284] No container was found matching "kube-scheduler"
	I1208 02:02:48.311682 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 02:02:48.311744 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 02:02:48.342675 1055021 cri.go:89] found id: ""
	I1208 02:02:48.342711 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.342720 1055021 logs.go:284] No container was found matching "kube-proxy"
	I1208 02:02:48.342726 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 02:02:48.342808 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 02:02:48.369485 1055021 cri.go:89] found id: ""
	I1208 02:02:48.369519 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.369528 1055021 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 02:02:48.369535 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 02:02:48.369608 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 02:02:48.396744 1055021 cri.go:89] found id: ""
	I1208 02:02:48.396769 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.396778 1055021 logs.go:284] No container was found matching "kindnet"
	I1208 02:02:48.396785 1055021 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 02:02:48.396847 1055021 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 02:02:48.422870 1055021 cri.go:89] found id: ""
	I1208 02:02:48.422894 1055021 logs.go:282] 0 containers: []
	W1208 02:02:48.422904 1055021 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 02:02:48.422913 1055021 logs.go:123] Gathering logs for container status ...
	I1208 02:02:48.422927 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 02:02:48.454409 1055021 logs.go:123] Gathering logs for kubelet ...
	I1208 02:02:48.454482 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 02:02:48.522366 1055021 logs.go:123] Gathering logs for dmesg ...
	I1208 02:02:48.522456 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 02:02:48.541233 1055021 logs.go:123] Gathering logs for describe nodes ...
	I1208 02:02:48.541391 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 02:02:48.617160 1055021 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 02:02:48.609193   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.609610   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611274   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.611724   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:48.613173   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 02:02:48.617226 1055021 logs.go:123] Gathering logs for CRI-O ...
	I1208 02:02:48.617247 1055021 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 02:02:51.146382 1055021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 02:02:51.160619 1055021 out.go:203] 
	W1208 02:02:51.163425 1055021 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 02:02:51.163473 1055021 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 02:02:51.163484 1055021 out.go:285] * Related issues:
	W1208 02:02:51.163498 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 02:02:51.163517 1055021 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 02:02:51.166282 1055021 out.go:203] 
	
	
	==> CRI-O <==
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317270944Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317325255Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317374683Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317435303Z" level=info msg="RDT not available in the host system"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.317500313Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318427518Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318519039Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.318582121Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319471993Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319585217Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.319774265Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.320528701Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321124572Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.321312036Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371792319Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.371951033Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372008469Z" level=info msg="Create NRI interface"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372105816Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372118829Z" level=info msg="runtime interface created"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372130251Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372136659Z" level=info msg="runtime interface starting up..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372142583Z" level=info msg="starting plugins..."
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372154743Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:56:47 newest-cni-448023 crio[614]: time="2025-12-08T01:56:47.372216209Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:56:47 newest-cni-448023 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:03:04.574915   14007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:04.575853   14007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:04.577488   14007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:04.577794   14007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:03:04.579345   14007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.872677] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:13] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:14] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:15] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:03:04 up  6:45,  0 user,  load average: 1.54, 0.85, 1.13
	Linux newest-cni-448023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:02 newest-cni-448023 kubelet[13889]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:02 newest-cni-448023 kubelet[13889]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:02 newest-cni-448023 kubelet[13889]: E1208 02:03:02.826628   13889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:03:02 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:03:03 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 08 02:03:03 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:03 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:03 newest-cni-448023 kubelet[13910]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:03 newest-cni-448023 kubelet[13910]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:03 newest-cni-448023 kubelet[13910]: E1208 02:03:03.571387   13910 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:03:03 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:03:03 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:03:04 newest-cni-448023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 08 02:03:04 newest-cni-448023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:04 newest-cni-448023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:03:04 newest-cni-448023 kubelet[13933]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:04 newest-cni-448023 kubelet[13933]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:03:04 newest-cni-448023 kubelet[13933]: E1208 02:03:04.322470   13933 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:03:04 newest-cni-448023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:03:04 newest-cni-448023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-448023 -n newest-cni-448023: exit status 2 (364.389271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-448023" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.89s)
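Note on the failure above: the kubelet section of the log points at the likely root cause — the kubelet exits during config validation with "kubelet is configured to not run on a host using cgroup v1", so no static pods (and therefore no apiserver) ever start, which matches the K8S_APISERVER_MISSING exit earlier in the log. A minimal way to confirm the host's cgroup mode, assuming shell access to the node (these commands are illustrative and were not part of the test run):

	# filesystem type mounted at /sys/fs/cgroup:
	#   cgroup2fs -> unified cgroup v2, tmpfs -> legacy/hybrid cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# with the docker driver the node container inherits the host's cgroup mode
	docker info --format '{{.CgroupVersion}}'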

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.109s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:06:15.784480  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1208 02:06:28.152088  791807 config.go:182] Loaded profile config "kindnet-000739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 25 more times]
E1208 02:07:29.415065  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 16 more times]
E1208 02:07:46.335786  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 65 more times]
E1208 02:08:51.933782  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 36 more times]
E1208 02:09:29.022976  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.029343  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.041053  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.062892  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.104353  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.185865  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.347462  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:09:29.669221  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:09:30.310992  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:09:31.593193  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 2 more times]
E1208 02:09:34.154530  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last warning repeated 17 more times]
E1208 02:09:52.722110  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/default-k8s-diff-port-993283/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1208 02:10:10.002288  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/auto-000739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 2 (452.740689ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-389831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-389831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.436µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-389831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-389831
helpers_test.go:243: (dbg) docker inspect no-preload-389831:

-- stdout --
	[
	    {
	        "Id": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	        "Created": "2025-12-08T01:40:32.167402442Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1047287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:50:50.554953574Z",
	            "FinishedAt": "2025-12-08T01:50:49.214340581Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hostname",
	        "HostsPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/hosts",
	        "LogPath": "/var/lib/docker/containers/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777/37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777-json.log",
	        "Name": "/no-preload-389831",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-389831:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-389831",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37e83e347e2b7ab4dd36971f6cc4ebf959d2a05816e972a6e230da54fd4ce777",
	                "LowerDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0-init/diff:/var/lib/docker/overlay2/4d33b09e935ed334bdd045620f6f7a8c50fa8b58a4df46b628961f3783d42726/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c42d6fd2445fddfbd8d0363a41980866eed41d27f62bbbdba088745d94c15a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-389831",
	                "Source": "/var/lib/docker/volumes/no-preload-389831/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-389831",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-389831",
	                "name.minikube.sigs.k8s.io": "no-preload-389831",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eaeeec708b96ab10f53f5e7226e115539fe166bf63ca544042e974e7018b260",
	            "SandboxKey": "/var/run/docker/netns/6eaeeec708b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-389831": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:00:7d:ce:0b:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49b509785d13da9a6b1bd627900832af9339129e0c331d938bcdf6ad31e4d2c7",
	                    "EndpointID": "795d8a30b86237e9ff6e670d6bc504ea3f9738fbb154a7d1d8e6085bd1fb8cce",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-389831",
	                        "37e83e347e2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 2 (414.180956ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-389831 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-000739 sudo systemctl status kubelet --all --full --no-pager                                                                                  │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl cat kubelet --no-pager                                                                                                  │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                   │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl status docker --all --full --no-pager                                                                                   │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl cat docker --no-pager                                                                                                   │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /etc/docker/daemon.json                                                                                                       │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo docker system info                                                                                                                │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl cat cri-docker --no-pager                                                                                               │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cri-dockerd --version                                                                                                             │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl status containerd --all --full --no-pager                                                                               │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl cat containerd --no-pager                                                                                               │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /lib/systemd/system/containerd.service                                                                                        │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo cat /etc/containerd/config.toml                                                                                                   │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo containerd config dump                                                                                                            │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl status crio --all --full --no-pager                                                                                     │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo systemctl cat crio --no-pager                                                                                                     │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ ssh     │ -p custom-flannel-000739 sudo crio config                                                                                                                       │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ delete  │ -p custom-flannel-000739                                                                                                                                        │ custom-flannel-000739     │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │ 08 Dec 25 02:10 UTC │
	│ start   │ -p enable-default-cni-000739 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio │ enable-default-cni-000739 │ jenkins │ v1.37.0 │ 08 Dec 25 02:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 02:10:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 02:10:11.545452 1099973 out.go:360] Setting OutFile to fd 1 ...
	I1208 02:10:11.545642 1099973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:10:11.545673 1099973 out.go:374] Setting ErrFile to fd 2...
	I1208 02:10:11.545692 1099973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:10:11.545968 1099973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 02:10:11.546403 1099973 out.go:368] Setting JSON to false
	I1208 02:10:11.547414 1099973 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24744,"bootTime":1765135068,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 02:10:11.547521 1099973 start.go:143] virtualization:  
	I1208 02:10:11.551279 1099973 out.go:179] * [enable-default-cni-000739] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 02:10:11.556088 1099973 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 02:10:11.556164 1099973 notify.go:221] Checking for updates...
	I1208 02:10:11.562599 1099973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 02:10:11.565866 1099973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 02:10:11.569029 1099973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 02:10:11.572189 1099973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 02:10:11.575232 1099973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 02:10:11.578963 1099973 config.go:182] Loaded profile config "no-preload-389831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 02:10:11.579067 1099973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 02:10:11.610578 1099973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 02:10:11.610712 1099973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:10:11.680846 1099973 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:10:11.671375663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:10:11.680960 1099973 docker.go:319] overlay module found
	I1208 02:10:11.684162 1099973 out.go:179] * Using the docker driver based on user configuration
	I1208 02:10:11.687009 1099973 start.go:309] selected driver: docker
	I1208 02:10:11.687026 1099973 start.go:927] validating driver "docker" against <nil>
	I1208 02:10:11.687040 1099973 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 02:10:11.687784 1099973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:10:11.743872 1099973 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:10:11.734940142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:10:11.744034 1099973 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1208 02:10:11.744281 1099973 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1208 02:10:11.744313 1099973 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 02:10:11.747300 1099973 out.go:179] * Using Docker driver with root privileges
	I1208 02:10:11.750229 1099973 cni.go:84] Creating CNI manager for "bridge"
	I1208 02:10:11.750247 1099973 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 02:10:11.750329 1099973 start.go:353] cluster config:
	{Name:enable-default-cni-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:10:11.757707 1099973 out.go:179] * Starting "enable-default-cni-000739" primary control-plane node in "enable-default-cni-000739" cluster
	I1208 02:10:11.767270 1099973 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 02:10:11.770253 1099973 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 02:10:11.773115 1099973 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:10:11.773160 1099973 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 02:10:11.773171 1099973 cache.go:65] Caching tarball of preloaded images
	I1208 02:10:11.773270 1099973 preload.go:238] Found /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1208 02:10:11.773281 1099973 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 02:10:11.773390 1099973 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/config.json ...
	I1208 02:10:11.773407 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/config.json: {Name:mk33b8b701c5b8dabe38c056e66f2c498c4611f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:11.773562 1099973 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 02:10:11.797425 1099973 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 02:10:11.797443 1099973 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 02:10:11.797458 1099973 cache.go:243] Successfully downloaded all kic artifacts
	I1208 02:10:11.797489 1099973 start.go:360] acquireMachinesLock for enable-default-cni-000739: {Name:mkf3ef2849fb201ccfbecf3150440366078b2112 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 02:10:11.797583 1099973 start.go:364] duration metric: took 80.108µs to acquireMachinesLock for "enable-default-cni-000739"
	I1208 02:10:11.797612 1099973 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 02:10:11.797675 1099973 start.go:125] createHost starting for "" (driver="docker")
	I1208 02:10:11.801101 1099973 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 02:10:11.801317 1099973 start.go:159] libmachine.API.Create for "enable-default-cni-000739" (driver="docker")
	I1208 02:10:11.801344 1099973 client.go:173] LocalClient.Create starting
	I1208 02:10:11.801407 1099973 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem
	I1208 02:10:11.801436 1099973 main.go:143] libmachine: Decoding PEM data...
	I1208 02:10:11.801455 1099973 main.go:143] libmachine: Parsing certificate...
	I1208 02:10:11.801514 1099973 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem
	I1208 02:10:11.801534 1099973 main.go:143] libmachine: Decoding PEM data...
	I1208 02:10:11.801545 1099973 main.go:143] libmachine: Parsing certificate...
	I1208 02:10:11.801904 1099973 cli_runner.go:164] Run: docker network inspect enable-default-cni-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 02:10:11.826608 1099973 cli_runner.go:211] docker network inspect enable-default-cni-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 02:10:11.826692 1099973 network_create.go:284] running [docker network inspect enable-default-cni-000739] to gather additional debugging logs...
	I1208 02:10:11.826710 1099973 cli_runner.go:164] Run: docker network inspect enable-default-cni-000739
	W1208 02:10:11.843831 1099973 cli_runner.go:211] docker network inspect enable-default-cni-000739 returned with exit code 1
	I1208 02:10:11.843871 1099973 network_create.go:287] error running [docker network inspect enable-default-cni-000739]: docker network inspect enable-default-cni-000739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-000739 not found
	I1208 02:10:11.843887 1099973 network_create.go:289] output of [docker network inspect enable-default-cni-000739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-000739 not found
	
	** /stderr **
	I1208 02:10:11.844026 1099973 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:10:11.861306 1099973 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
	I1208 02:10:11.861632 1099973 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f1a6e8e2920e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:68:b4:51:62:af} reservation:<nil>}
	I1208 02:10:11.861997 1099973 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f53410dda724 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:9d:f4:4c:a1:2c} reservation:<nil>}
	I1208 02:10:11.862277 1099973 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-49b509785d13 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:6e:82:d5:2d:44} reservation:<nil>}
	I1208 02:10:11.862758 1099973 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5480}
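The three "skipping subnet ... that is taken" lines above show minikube probing candidate private /24 ranges until it finds one that is not already backed by a local bridge interface, then reserving it for the new Docker network. A minimal Go sketch of that idea follows; it is illustrative only (the candidate list and the interface check are assumptions, not minikube's actual network package):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already has an address inside cidr.
func taken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate /24s in the same order the log walks them (illustrative subset).
	for _, third := range []int{49, 58, 67, 76, 85, 94} {
		_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if taken(cidr) {
			fmt.Println("skipping taken subnet", cidr)
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}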
	I1208 02:10:11.862782 1099973 network_create.go:124] attempt to create docker network enable-default-cni-000739 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 02:10:11.862859 1099973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-000739 enable-default-cni-000739
	I1208 02:10:11.923063 1099973 network_create.go:108] docker network enable-default-cni-000739 192.168.85.0/24 created
	I1208 02:10:11.923104 1099973 kic.go:121] calculated static IP "192.168.85.2" for the "enable-default-cni-000739" container
	I1208 02:10:11.923192 1099973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 02:10:11.940757 1099973 cli_runner.go:164] Run: docker volume create enable-default-cni-000739 --label name.minikube.sigs.k8s.io=enable-default-cni-000739 --label created_by.minikube.sigs.k8s.io=true
	I1208 02:10:11.959101 1099973 oci.go:103] Successfully created a docker volume enable-default-cni-000739
	I1208 02:10:11.959192 1099973 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-000739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-000739 --entrypoint /usr/bin/test -v enable-default-cni-000739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 02:10:12.479825 1099973 oci.go:107] Successfully prepared a docker volume enable-default-cni-000739
	I1208 02:10:12.479901 1099973 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:10:12.479913 1099973 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 02:10:12.479981 1099973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-000739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 02:10:16.550472 1099973 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-000739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.070438598s)
	I1208 02:10:16.550511 1099973 kic.go:203] duration metric: took 4.070594383s to extract preloaded images to volume ...
	W1208 02:10:16.550666 1099973 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 02:10:16.550792 1099973 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 02:10:16.611358 1099973 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-000739 --name enable-default-cni-000739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-000739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-000739 --network enable-default-cni-000739 --ip 192.168.85.2 --volume enable-default-cni-000739:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 02:10:16.932348 1099973 cli_runner.go:164] Run: docker container inspect enable-default-cni-000739 --format={{.State.Running}}
	I1208 02:10:16.953283 1099973 cli_runner.go:164] Run: docker container inspect enable-default-cni-000739 --format={{.State.Status}}
	I1208 02:10:16.977062 1099973 cli_runner.go:164] Run: docker exec enable-default-cni-000739 stat /var/lib/dpkg/alternatives/iptables
	I1208 02:10:17.039514 1099973 oci.go:144] the created container "enable-default-cni-000739" has a running status.
	I1208 02:10:17.039545 1099973 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa...
	I1208 02:10:17.165102 1099973 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 02:10:17.190045 1099973 cli_runner.go:164] Run: docker container inspect enable-default-cni-000739 --format={{.State.Status}}
	I1208 02:10:17.209711 1099973 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 02:10:17.209731 1099973 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-000739 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 02:10:17.260350 1099973 cli_runner.go:164] Run: docker container inspect enable-default-cni-000739 --format={{.State.Status}}
	I1208 02:10:17.283907 1099973 machine.go:94] provisionDockerMachine start ...
	I1208 02:10:17.284014 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:17.307447 1099973 main.go:143] libmachine: Using SSH client type: native
	I1208 02:10:17.308107 1099973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1208 02:10:17.308123 1099973 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 02:10:17.308910 1099973 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
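The dial above fails with "handshake failed: EOF" because sshd inside the freshly started container is not yet accepting connections on the forwarded port (33842 here, assigned by Docker); the next line shows the same command succeeding about three seconds later. A small Go sketch of that kind of readiness polling, under the assumption that simply opening the TCP port is an adequate probe:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection can be opened or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh port %s not reachable within %s", addr, timeout)
}

func main() {
	// Port value taken from this log; in practice it is discovered via docker container inspect.
	if err := waitForSSH("127.0.0.1:33842", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}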
	I1208 02:10:20.462482 1099973 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-000739
	
	I1208 02:10:20.462507 1099973 ubuntu.go:182] provisioning hostname "enable-default-cni-000739"
	I1208 02:10:20.462586 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:20.479643 1099973 main.go:143] libmachine: Using SSH client type: native
	I1208 02:10:20.479965 1099973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1208 02:10:20.479980 1099973 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-000739 && echo "enable-default-cni-000739" | sudo tee /etc/hostname
	I1208 02:10:20.639944 1099973 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-000739
	
	I1208 02:10:20.640021 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:20.658014 1099973 main.go:143] libmachine: Using SSH client type: native
	I1208 02:10:20.658327 1099973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1208 02:10:20.658350 1099973 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-000739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-000739/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-000739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 02:10:20.814967 1099973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 02:10:20.814994 1099973 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-789938/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-789938/.minikube}
	I1208 02:10:20.815023 1099973 ubuntu.go:190] setting up certificates
	I1208 02:10:20.815039 1099973 provision.go:84] configureAuth start
	I1208 02:10:20.815097 1099973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-000739
	I1208 02:10:20.832464 1099973 provision.go:143] copyHostCerts
	I1208 02:10:20.832535 1099973 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem, removing ...
	I1208 02:10:20.832548 1099973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem
	I1208 02:10:20.832626 1099973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/ca.pem (1078 bytes)
	I1208 02:10:20.832720 1099973 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem, removing ...
	I1208 02:10:20.832731 1099973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem
	I1208 02:10:20.832758 1099973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/cert.pem (1123 bytes)
	I1208 02:10:20.832821 1099973 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem, removing ...
	I1208 02:10:20.832830 1099973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem
	I1208 02:10:20.832856 1099973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-789938/.minikube/key.pem (1675 bytes)
	I1208 02:10:20.832911 1099973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-000739 san=[127.0.0.1 192.168.85.2 enable-default-cni-000739 localhost minikube]
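The server certificate generated above is signed by the shared minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube). A self-contained Go sketch of issuing such a certificate with crypto/x509; the throwaway CA and field choices are illustrative, not minikube's certs package, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem from the minikube home).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs the log lists for this node.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-000739"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"enable-default-cni-000739", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}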
	I1208 02:10:20.948861 1099973 provision.go:177] copyRemoteCerts
	I1208 02:10:20.948928 1099973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 02:10:20.948971 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:20.965911 1099973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa Username:docker}
	I1208 02:10:21.074906 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 02:10:21.092863 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 02:10:21.110752 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 02:10:21.128110 1099973 provision.go:87] duration metric: took 313.045948ms to configureAuth
	I1208 02:10:21.128141 1099973 ubuntu.go:206] setting minikube options for container-runtime
	I1208 02:10:21.128356 1099973 config.go:182] Loaded profile config "enable-default-cni-000739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 02:10:21.128469 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:21.145428 1099973 main.go:143] libmachine: Using SSH client type: native
	I1208 02:10:21.145747 1099973 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1208 02:10:21.145768 1099973 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 02:10:21.471003 1099973 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 02:10:21.471023 1099973 machine.go:97] duration metric: took 4.187098326s to provisionDockerMachine
	I1208 02:10:21.471049 1099973 client.go:176] duration metric: took 9.669682679s to LocalClient.Create
	I1208 02:10:21.471067 1099973 start.go:167] duration metric: took 9.669751185s to libmachine.API.Create "enable-default-cni-000739"
	I1208 02:10:21.471077 1099973 start.go:293] postStartSetup for "enable-default-cni-000739" (driver="docker")
	I1208 02:10:21.471088 1099973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 02:10:21.471152 1099973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 02:10:21.471196 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:21.489226 1099973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa Username:docker}
	I1208 02:10:21.606933 1099973 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 02:10:21.610490 1099973 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 02:10:21.610523 1099973 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 02:10:21.610535 1099973 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/addons for local assets ...
	I1208 02:10:21.610594 1099973 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-789938/.minikube/files for local assets ...
	I1208 02:10:21.610698 1099973 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem -> 7918072.pem in /etc/ssl/certs
	I1208 02:10:21.610803 1099973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 02:10:21.618449 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 02:10:21.636389 1099973 start.go:296] duration metric: took 165.296853ms for postStartSetup
	I1208 02:10:21.636751 1099973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-000739
	I1208 02:10:21.653623 1099973 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/config.json ...
	I1208 02:10:21.653912 1099973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 02:10:21.653971 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:21.670657 1099973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa Username:docker}
	I1208 02:10:21.771906 1099973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 02:10:21.776503 1099973 start.go:128] duration metric: took 9.978814525s to createHost
	I1208 02:10:21.776526 1099973 start.go:83] releasing machines lock for "enable-default-cni-000739", held for 9.978934657s
	I1208 02:10:21.776596 1099973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-000739
	I1208 02:10:21.793676 1099973 ssh_runner.go:195] Run: cat /version.json
	I1208 02:10:21.793726 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:21.794006 1099973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 02:10:21.794063 1099973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-000739
	I1208 02:10:21.811772 1099973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa Username:docker}
	I1208 02:10:21.822280 1099973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/enable-default-cni-000739/id_rsa Username:docker}
	I1208 02:10:21.914565 1099973 ssh_runner.go:195] Run: systemctl --version
	I1208 02:10:22.009321 1099973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 02:10:22.050236 1099973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 02:10:22.055720 1099973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 02:10:22.055799 1099973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 02:10:22.085511 1099973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 02:10:22.085536 1099973 start.go:496] detecting cgroup driver to use...
	I1208 02:10:22.085568 1099973 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 02:10:22.085632 1099973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 02:10:22.103987 1099973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 02:10:22.117282 1099973 docker.go:218] disabling cri-docker service (if available) ...
	I1208 02:10:22.117433 1099973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 02:10:22.134621 1099973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 02:10:22.152681 1099973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 02:10:22.276763 1099973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 02:10:22.405533 1099973 docker.go:234] disabling docker service ...
	I1208 02:10:22.405614 1099973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 02:10:22.426036 1099973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 02:10:22.439153 1099973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 02:10:22.560407 1099973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 02:10:22.683617 1099973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 02:10:22.696177 1099973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 02:10:22.710433 1099973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 02:10:22.710565 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.720460 1099973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 02:10:22.720626 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.730774 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.740206 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.749338 1099973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 02:10:22.757621 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.766412 1099973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 02:10:22.780207 1099973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
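The run of sed -i commands above rewrites individual keys (pause_image, cgroup_manager, conmon_cgroup, default_sysctls, ip_unprivileged_port_start) in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Go equivalent of that kind of in-place key replacement, assuming the same drop-in path and a couple of the keys from the log (illustrative only, not minikube's crio package):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites any existing `key = ...` line in path to `key = "value"`,
// mirroring the sed -i edits the log applies to 02-crio.conf.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10.1",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setTOMLKey(conf, k, v); err != nil {
			fmt.Println("edit failed:", err)
		}
	}
}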
	I1208 02:10:22.789211 1099973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 02:10:22.796916 1099973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 02:10:22.804673 1099973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:10:22.924181 1099973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 02:10:23.122379 1099973 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 02:10:23.122492 1099973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 02:10:23.126552 1099973 start.go:564] Will wait 60s for crictl version
	I1208 02:10:23.126616 1099973 ssh_runner.go:195] Run: which crictl
	I1208 02:10:23.130342 1099973 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 02:10:23.157578 1099973 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1208 02:10:23.157717 1099973 ssh_runner.go:195] Run: crio --version
	I1208 02:10:23.185475 1099973 ssh_runner.go:195] Run: crio --version
	I1208 02:10:23.215999 1099973 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1208 02:10:23.219122 1099973 cli_runner.go:164] Run: docker network inspect enable-default-cni-000739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:10:23.235521 1099973 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 02:10:23.239494 1099973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
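The bash one-liner above makes the /etc/hosts update idempotent: any stale host.minikube.internal line is filtered out before a fresh "192.168.85.1	host.minikube.internal" entry is appended. The same pattern sketched in Go, with the path and entry taken from the log (not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline the log runs over /etc/hosts.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}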
	I1208 02:10:23.249217 1099973 kubeadm.go:884] updating cluster {Name:enable-default-cni-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 02:10:23.249343 1099973 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 02:10:23.249397 1099973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:10:23.280315 1099973 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 02:10:23.280339 1099973 crio.go:433] Images already preloaded, skipping extraction
	I1208 02:10:23.280397 1099973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:10:23.305550 1099973 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 02:10:23.305576 1099973 cache_images.go:86] Images are preloaded, skipping loading
	I1208 02:10:23.305583 1099973 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1208 02:10:23.305666 1099973 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=enable-default-cni-000739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1208 02:10:23.305747 1099973 ssh_runner.go:195] Run: crio config
	I1208 02:10:23.387243 1099973 cni.go:84] Creating CNI manager for "bridge"
	I1208 02:10:23.387281 1099973 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 02:10:23.387304 1099973 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-000739 NodeName:enable-default-cni-000739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 02:10:23.387443 1099973 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-000739"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
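The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml as one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short Go sketch of decoding such a file and sanity-checking a couple of the fields the log cares about, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml; the struct covers only a small subset of the real schema:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures only the handful of fields this sketch inspects.
type doc struct {
	Kind              string `yaml:"kind"`
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		switch d.Kind {
		case "ClusterConfiguration":
			fmt.Println("k8s", d.KubernetesVersion, "podSubnet", d.Networking.PodSubnet)
		case "KubeletConfiguration":
			fmt.Println("kubelet cgroup driver:", d.CgroupDriver)
		}
	}
}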
	
	I1208 02:10:23.387527 1099973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 02:10:23.395274 1099973 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 02:10:23.395344 1099973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 02:10:23.402742 1099973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1208 02:10:23.415508 1099973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 02:10:23.428429 1099973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1208 02:10:23.441207 1099973 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 02:10:23.444844 1099973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:10:23.454557 1099973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:10:23.580831 1099973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:10:23.597248 1099973 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739 for IP: 192.168.85.2
	I1208 02:10:23.597307 1099973 certs.go:195] generating shared ca certs ...
	I1208 02:10:23.597348 1099973 certs.go:227] acquiring lock for ca certs: {Name:mk9de40d9901eed6af6964a234a1e3f47bec86e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:23.597539 1099973 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key
	I1208 02:10:23.597628 1099973 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key
	I1208 02:10:23.597655 1099973 certs.go:257] generating profile certs ...
	I1208 02:10:23.597751 1099973 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.key
	I1208 02:10:23.597804 1099973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.crt with IP's: []
	I1208 02:10:24.108130 1099973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.crt ...
	I1208 02:10:24.108170 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.crt: {Name:mkc3d6aa97066f4d3a11d4e41c759313ed732023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.108376 1099973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.key ...
	I1208 02:10:24.108395 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/client.key: {Name:mk240b1e09d5c029d0c3ed73f6506c3b47b74734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.108492 1099973 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key.58ccb55f
	I1208 02:10:24.108515 1099973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt.58ccb55f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 02:10:24.238631 1099973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt.58ccb55f ...
	I1208 02:10:24.238660 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt.58ccb55f: {Name:mkff2b93a32d6f78b6ae223e85a01dfbd1a756a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.238838 1099973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key.58ccb55f ...
	I1208 02:10:24.238869 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key.58ccb55f: {Name:mke91a41613b922f61b9e99358aecc063984e609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.238957 1099973 certs.go:382] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt.58ccb55f -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt
	I1208 02:10:24.239042 1099973 certs.go:386] copying /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key.58ccb55f -> /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key
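The apiserver certificate generated above includes 10.96.0.1 among its IP SANs: the first host address of the ServiceCIDR 10.96.0.0/12, which later becomes the ClusterIP of the kubernetes Service in the default namespace. A tiny Go sketch of that derivation (IPv4 only, and assuming the CIDR base ends in .0):

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the conventional apiserver ClusterIP: the network
// address of the service CIDR plus one (10.96.0.0/12 -> 10.96.0.1).
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 handled in this sketch")
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3]++ // first host address; fine for a network base ending in .0
	return out, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // 10.96.0.1
}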
	I1208 02:10:24.239100 1099973 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.key
	I1208 02:10:24.239118 1099973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.crt with IP's: []
	I1208 02:10:24.424568 1099973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.crt ...
	I1208 02:10:24.424597 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.crt: {Name:mkb3fd67a93f1448e3f1ae1f766a329ff658893f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.424763 1099973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.key ...
	I1208 02:10:24.424781 1099973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.key: {Name:mke60aba6191341c6af6de819cefb1d621ffa613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:10:24.424965 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem (1338 bytes)
	W1208 02:10:24.425016 1099973 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807_empty.pem, impossibly tiny 0 bytes
	I1208 02:10:24.425030 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca-key.pem (1679 bytes)
	I1208 02:10:24.425058 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/ca.pem (1078 bytes)
	I1208 02:10:24.425089 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/cert.pem (1123 bytes)
	I1208 02:10:24.425121 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/certs/key.pem (1675 bytes)
	I1208 02:10:24.425173 1099973 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem (1708 bytes)
	I1208 02:10:24.425726 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 02:10:24.445917 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 02:10:24.465562 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 02:10:24.485097 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 02:10:24.512034 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1208 02:10:24.582562 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 02:10:24.603933 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 02:10:24.623692 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/enable-default-cni-000739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 02:10:24.642936 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/ssl/certs/7918072.pem --> /usr/share/ca-certificates/7918072.pem (1708 bytes)
	I1208 02:10:24.661764 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 02:10:24.680592 1099973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-789938/.minikube/certs/791807.pem --> /usr/share/ca-certificates/791807.pem (1338 bytes)
	I1208 02:10:24.698483 1099973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 02:10:24.711655 1099973 ssh_runner.go:195] Run: openssl version
	I1208 02:10:24.718000 1099973 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7918072.pem
	I1208 02:10:24.725659 1099973 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7918072.pem /etc/ssl/certs/7918072.pem
	I1208 02:10:24.733118 1099973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7918072.pem
	I1208 02:10:24.737101 1099973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:23 /usr/share/ca-certificates/7918072.pem
	I1208 02:10:24.737167 1099973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7918072.pem
	I1208 02:10:24.778488 1099973 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 02:10:24.786577 1099973 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7918072.pem /etc/ssl/certs/3ec20f2e.0
	I1208 02:10:24.794139 1099973 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:10:24.801619 1099973 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 02:10:24.809119 1099973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:10:24.812876 1099973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:13 /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:10:24.812942 1099973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:10:24.854890 1099973 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 02:10:24.862683 1099973 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 02:10:24.870031 1099973 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/791807.pem
	I1208 02:10:24.877343 1099973 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/791807.pem /etc/ssl/certs/791807.pem
	I1208 02:10:24.884728 1099973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791807.pem
	I1208 02:10:24.888680 1099973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:23 /usr/share/ca-certificates/791807.pem
	I1208 02:10:24.888746 1099973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791807.pem
	I1208 02:10:24.929407 1099973 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 02:10:24.936948 1099973 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/791807.pem /etc/ssl/certs/51391683.0
	I1208 02:10:24.944172 1099973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 02:10:24.947738 1099973 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 02:10:24.947792 1099973 kubeadm.go:401] StartCluster: {Name:enable-default-cni-000739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-000739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:10:24.947905 1099973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 02:10:24.947963 1099973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 02:10:24.974825 1099973 cri.go:89] found id: ""
	I1208 02:10:24.974914 1099973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 02:10:24.982686 1099973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 02:10:24.990319 1099973 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 02:10:24.990404 1099973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 02:10:24.997819 1099973 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 02:10:24.997846 1099973 kubeadm.go:158] found existing configuration files:
	
	I1208 02:10:24.997930 1099973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 02:10:25.008335 1099973 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 02:10:25.008450 1099973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 02:10:25.018292 1099973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 02:10:25.027809 1099973 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 02:10:25.027924 1099973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 02:10:25.041601 1099973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 02:10:25.050452 1099973 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 02:10:25.050546 1099973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 02:10:25.058868 1099973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 02:10:25.068663 1099973 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 02:10:25.068812 1099973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 02:10:25.078243 1099973 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 02:10:25.123579 1099973 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 02:10:25.123962 1099973 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 02:10:25.150665 1099973 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 02:10:25.150750 1099973 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 02:10:25.150794 1099973 kubeadm.go:319] OS: Linux
	I1208 02:10:25.150863 1099973 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 02:10:25.150924 1099973 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 02:10:25.150976 1099973 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 02:10:25.151026 1099973 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 02:10:25.151080 1099973 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 02:10:25.151141 1099973 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 02:10:25.151190 1099973 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 02:10:25.151242 1099973 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 02:10:25.151292 1099973 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 02:10:25.240716 1099973 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 02:10:25.240856 1099973 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 02:10:25.240965 1099973 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 02:10:25.251268 1099973 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 02:10:25.258216 1099973 out.go:252]   - Generating certificates and keys ...
	I1208 02:10:25.258324 1099973 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 02:10:25.258402 1099973 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 02:10:25.578755 1099973 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 02:10:26.411151 1099973 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> CRI-O <==
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072876779Z" level=info msg="Using the internal default seccomp profile"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072883992Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072889867Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072896496Z" level=info msg="RDT not available in the host system"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.072909567Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073778565Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073798208Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.073814225Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074485379Z" level=info msg="Conmon does support the --sync option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074501871Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.074630463Z" level=info msg="Updated default CNI network name to "
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.075394984Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07576115Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.07584312Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123803822Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.123963487Z" level=info msg="Starting seccomp notifier watcher"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124019635Z" level=info msg="Create NRI interface"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124120732Z" level=info msg="built-in NRI default validator is disabled"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124136092Z" level=info msg="runtime interface created"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124147144Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124154217Z" level=info msg="runtime interface starting up..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124160937Z" level=info msg="starting plugins..."
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124173171Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:50:57 no-preload-389831 crio[614]: time="2025-12-08T01:50:57.124229549Z" level=info msg="No systemd watchdog enabled"
	Dec 08 01:50:57 no-preload-389831 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:10:31.958775   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:10:31.959907   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:10:31.961609   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:10:31.961900   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:10:31.963352   10294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 01:16] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:17] overlayfs: idmapped layers are currently not supported
	[ +14.867099] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:18] overlayfs: idmapped layers are currently not supported
	[ +19.526684] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:19] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:20] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:22] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:23] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:34] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:36] overlayfs: idmapped layers are currently not supported
	[ +45.641522] overlayfs: idmapped layers are currently not supported
	[  +2.682342] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:37] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:38] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:39] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:40] overlayfs: idmapped layers are currently not supported
	[ +14.002656] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:42] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:43] overlayfs: idmapped layers are currently not supported
	[Dec 8 01:45] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:03] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:05] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:07] overlayfs: idmapped layers are currently not supported
	[Dec 8 02:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:10:32 up  6:52,  0 user,  load average: 1.55, 1.71, 1.44
	Linux no-preload-389831 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:10:29 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:29 no-preload-389831 kubelet[10166]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:29 no-preload-389831 kubelet[10166]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:29 no-preload-389831 kubelet[10166]: E1208 02:10:29.839878   10166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:10:29 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:10:29 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:10:30 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1563.
	Dec 08 02:10:30 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:30 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:30 no-preload-389831 kubelet[10172]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:30 no-preload-389831 kubelet[10172]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:30 no-preload-389831 kubelet[10172]: E1208 02:10:30.596923   10172 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:10:30 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:10:30 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:10:31 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 08 02:10:31 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:31 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:31 no-preload-389831 kubelet[10206]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:31 no-preload-389831 kubelet[10206]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 08 02:10:31 no-preload-389831 kubelet[10206]: E1208 02:10:31.363755   10206 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:10:31 no-preload-389831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:10:31 no-preload-389831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:10:32 no-preload-389831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1565.
	Dec 08 02:10:32 no-preload-389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:10:32 no-preload-389831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
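The repeated kubelet restarts in the log above (restart counter 1563-1565) all fail on the same validation: the kubelet refuses to start because the host is still on a cgroup v1 hierarchy, so the control-plane static pods never come up, which is consistent with the "connection refused" errors on localhost:8443 and the Stopped apiserver status below. As a minimal sketch, not part of the test suite and using only the Go standard library, this is one common way to tell which cgroup hierarchy a node exposes (assuming the standard /sys/fs/cgroup mount):

package main

import (
	"fmt"
	"os"
)

func main() {
	// A unified cgroup v2 host exposes /sys/fs/cgroup/cgroup.controllers at
	// the hierarchy root; a legacy or hybrid cgroup v1 host does not, which
	// is the condition the kubelet above rejects.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy or hybrid hierarchy)")
	}
}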
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-389831 -n no-preload-389831: exit status 2 (417.14374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-389831" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (269.67s)
E1208 02:12:21.451403  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/no-preload-389831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
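The trace below is the Go testing package's standard timeout behavior: once the test binary's overall -timeout (2h0m0s for this run) elapses, testing.(*M).startAlarm panics with "test timed out" and the runtime prints every live goroutine, which is what the long stack listing that follows is. A minimal reproduction for reference (file, package, and test names are illustrative, not from this suite):

// timeout_demo_test.go (run with: go test -timeout 5s)
package demo

import (
	"testing"
	"time"
)

// TestSleepsPastTimeout blocks past the -timeout budget, so the test binary
// panics with "test timed out after 5s" and dumps all goroutines, just as the
// integration run here did after 2h0m0s.
func TestSleepsPastTimeout(t *testing.T) {
	time.Sleep(10 * time.Second)
}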
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (36m14s)
		TestNetworkPlugins/group/bridge (25s)
		TestNetworkPlugins/group/bridge/Start (25s)

                                                
                                                
goroutine 6641 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

                                                
                                                
goroutine 1 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000412540, 0x40006f3bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40006c2078, {0x534c580, 0x2c, 0x2c}, {0x40006f3d08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40007fcb40)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40007fcb40)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

                                                
                                                
goroutine 4006 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001549260, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4001
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1065 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40017c3e00, 0x4001916620)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 751
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 158 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 157
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 6436 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017390e0, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6404
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5547 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x40015001c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5546
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 3179 [chan receive, 36 minutes]:
testing.(*T).Run(0x400186e000, {0x296d53f?, 0x13bfa681195a?}, 0x4001ae84c8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x400186e000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x400186e000, 0x339b528)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 6611 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x13, 0x4001753c38, 0x4, 0x4001ab4fc0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4001753d98?, 0x1929a0?, 0xffffc685c1a1?, 0x0?, 0x4000187680?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40016cc7c0)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4001753d68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001ab7c80)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001ab7c80)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400169a1c0, 0x4001ab7c80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x400169a1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x400169a1c0, 0x40012d6030)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3557
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3706 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40016f25d0, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016f25c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001481080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013db180?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x40005556a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x4001405f38, {0x369d760, 0x40019ec030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40005557a8?, {0x369d760?, 0x40019ec030?}, 0xb0?, 0x40000717c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001782030, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3734
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6193 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x400153e058?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6157
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 141 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000846ea0, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 133
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4246 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4245
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 140 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x40013e2fc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 133
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 3733 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x4001c77500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3729
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 4005 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x40003b61d8?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4001
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 3490 [chan receive, 9 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001c77880, 0x4001ae84c8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3179
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 157 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x40000a5740, 0x4001471f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x18?, 0x40000a5740, 0x40000a5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400033c300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 141
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 831 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 830
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 156 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40004cfd90, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40004cfd80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000846ea0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000298fc0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x40005596a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x4001406f38, {0x369d760, 0x40012f4de0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40005597a8?, {0x369d760?, 0x40012f4de0?}, 0x60?, 0x40001e56b8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40012efd00, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 141
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1297 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff4db5a800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40019ba180?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40019ba180)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40019ba180)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40019f0f40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40019f0f40)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x400172d800, {0x36d3200, 0x40019f0f40})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x400172d800)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1295
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 5853 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x40013bd740, 0x40013bd788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x84?, 0x40013bd740, 0x40013bd788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x40013bd750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f3510?, 0x4000294000?, 0x4001500380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4011 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4010
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 6440 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x400055b740, 0x400055b788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x0?, 0x400055b740, 0x400055b788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x36e5858?, 0x4001ac8850?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001ac8770?, 0x0?, 0x4001aa5080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6436
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 3734 [chan receive, 34 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001481080, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3729
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 6197 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x400180b710, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400180b700)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001480f00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014d8fc0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x4000555ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x400146df38, {0x369d760, 0x4001b3faa0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d760?, 0x4001b3faa0?}, 0xf0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001855dc0, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6194
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 3707 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x400136bf40, 0x400136bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x58?, 0x400136bf40, 0x400136bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001688f00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3734
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 1521 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40019f1490, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019f1480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400185fe00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003fea80?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x4001368f38, {0x369d760, 0x4001a98b10}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3510?, {0x369d760?, 0x4001a98b10?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a96950, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1497
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 663 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff4db5ae00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000208800?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4000208800)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4000208800)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4000849700)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4000849700)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4002512000, {0x36d3200, 0x4000849700})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4002512000)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 661
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 1497 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400185fe00, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1477
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1110 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x40018a3680)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1107
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

                                                
                                                
goroutine 1034 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400184aa80, 0x4001840af0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1033
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1523 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1522
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5552 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x4001b50f40, 0x4001b50f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x0?, 0x4001b50f40, 0x4001b50f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x36e5858?, 0x40014be4d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40014be3f0?, 0x0?, 0x400186f180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5548
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 1109 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x40018a3680)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1107
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

                                                
                                                
goroutine 1522 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x4001b50740, 0x4001403f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x51?, 0x4001b50740, 0x4001b50788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x1?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1497
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5852 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x400180a0d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400180a0c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014eec60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014d9810?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x4000555ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x4001751f38, {0x369d760, 0x40012dd8c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d760?, 0x40012dd8c0?}, 0x9c?, 0x400155f980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001aa8f30, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6198 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x40013b8f40, 0x40013b8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x64?, 0x40013b8f40, 0x40013b8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x40013b8f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f3510?, 0x4000294000?, 0x400153e058?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6194
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4241 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400185f380, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5214 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5213
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 1084 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40018d9080, 0x4001841ea0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1083
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 6409 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x40016580b8?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6408
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 829 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x400180b110, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400180b100)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000846ba0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000555e88?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0xffff946245c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x4001470f38, {0x369d760, 0x400167b7d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3510?, {0x369d760?, 0x400167b7d0?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40025167c0, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 855
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 830 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x40000a1f40, 0x40006f7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x51?, 0x40000a1f40, 0x40000a1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x40016c4a80?, 0x400163fa40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400155e780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 855
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5210 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400185e480, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5197
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5871 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014eec60, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5869
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5854 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5853
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 854 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x40015aa380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 855 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000846ba0, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1979 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000430780, 0x40013daa80)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1468
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 6199 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6198
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 3708 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3707
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 1871 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400033cd80, 0x40013da8c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1870
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 6415 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6414
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5548 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001549980, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5546
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5209 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x4001688480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5197
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 1496 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1477
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6614 [select]:
os/exec.(*Cmd).watchCtx(0x4001ab7c80, 0x4001c74460)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6611
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 3557 [chan receive]:
testing.(*T).Run(0x4001458700, {0x296d544?, 0x368a030?}, 0x40012d6030)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001458700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x4001458700, 0x4000208600)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3490
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4245 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x4001b4cf40, 0x400131bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0xab?, 0x4001b4cf40, 0x4001b4cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x0?, 0x4001b4cf50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f3510?, 0x4000294000?, 0x400186f180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4241
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5213 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x4000554740, 0x4000554788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x0?, 0x4000554740, 0x4000554788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x36e5858?, 0x40014c3ce0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40014c3c00?, 0x0?, 0x4001b38480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5210
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4224 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x400186f180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6441 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6440
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5870 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x4001500380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5869
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5551 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001326c90, 0xe)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001326c80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001549980)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001724d20?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x400009e6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x40000d3f38, {0x369d760, 0x40012ebec0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d760?, 0x40012ebec0?}, 0x0?, 0x36e5858?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001aa8630, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5548
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6410 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001a64de0, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6408
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4009 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40016f3250, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016f3240)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001549260)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013daaf0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x4001b506a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x400136cf38, {0x369d760, 0x4001b3f830}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001b507a8?, {0x369d760?, 0x4001b3f830?}, 0xb0?, 0x4001689200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001782d60, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4006
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6413 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40016f30d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016f30c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001a64de0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001840310?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x400131df38, {0x369d760, 0x4001790990}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d760?, 0x4001790990?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001782f50, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1883 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400033d980, 0x40013db420)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1882
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 4244 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40016f2690, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016f2680)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400185f380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013daaf0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x40013bdea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x40006fcf38, {0x369d760, 0x40019ed020}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013bdfa8?, {0x369d760?, 0x40019ed020?}, 0xc0?, 0x40016b5800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001783170, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4241
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6414 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x4001b4bf40, 0x4001b4bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x0?, 0x4001b4bf40, 0x4001b4bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x36e5858?, 0x40015e4a10?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40015e4930?, 0x0?, 0x4001610300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6410
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 6439 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000849490, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000849480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017390e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001eca688?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x40002fe6a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x400146cf38, {0x369d760, 0x40019edad0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40002fe4d0?, {0x369d760?, 0x40019edad0?}, 0x1?, 0x36e5858?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400084b690, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6436
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 4010 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5bf0, 0x4000298070}, 0x40000a5f40, 0x40000a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5bf0, 0x4000298070}, 0x90?, 0x40000a5f40, 0x40000a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5bf0?, 0x4000298070?}, 0x36e5858?, 0x40013dbb90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000430480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4006
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5553 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5552
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 6194 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001480f00, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6157
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 6613 [IO wait]:
internal/poll.runtime_pollWait(0xffff4d770200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001b7f560?, 0x4001cae8db?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001b7f560, {0x4001cae8db, 0xd725, 0xd725})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400071a630, {0x4001cae8db?, 0x4001ec8568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012d63c0, {0x369bb38, 0x40000a6b50})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bd20, 0x40012d63c0}, {0x369bb38, 0x40000a6b50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400071a630?, {0x369bd20, 0x40012d63c0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400071a630, {0x369bd20, 0x40012d63c0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bd20, 0x40012d63c0}, {0x369bbb8, 0x400071a630}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400169a700?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6611
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                                
goroutine 5212 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40019f1c50, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019f1c40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400185e480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001916380?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5bf0?, 0x4000298070?}, 0x4000299f78?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5bf0, 0x4000298070}, 0x400131ff38, {0x369d760, 0x40019a58c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d760?, 0x40019a58c0?}, 0x90?, 0x36e5858?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a96be0, 0x3b9aca00, 0x0, 0x1, 0x4000298070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5210
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6435 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe960, {{0x36f3510, 0x4000294000?}, 0x36f76d0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6404
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6612 [IO wait]:
internal/poll.runtime_pollWait(0xffff4d770400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001b7f4a0?, 0x40014dda78?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001b7f4a0, {0x40014dda78, 0x588, 0x588})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400071a618, {0x40014dda78?, 0x4001ec7d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012d6300, {0x369bb38, 0x40000a6b40})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bd20, 0x40012d6300}, {0x369bb38, 0x40000a6b40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400071a618?, {0x369bd20, 0x40012d6300})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400071a618, {0x369bd20, 0x40012d6300})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bd20, 0x40012d6300}, {0x369bbb8, 0x400071a618}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400169a1c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6611
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                    

Test pass (271/364)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.48
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 6.98
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.1
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 7.65
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.59
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 155.49
40 TestAddons/serial/GCPAuth/Namespaces 0.21
41 TestAddons/serial/GCPAuth/FakeCredentials 10.83
57 TestAddons/StoppedEnableDisable 12.41
58 TestCertOptions 42.49
59 TestCertExpiration 237.28
61 TestForceSystemdFlag 35.04
62 TestForceSystemdEnv 44.63
67 TestErrorSpam/setup 30.43
68 TestErrorSpam/start 0.78
69 TestErrorSpam/status 1.15
70 TestErrorSpam/pause 6.97
71 TestErrorSpam/unpause 5.7
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.35
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 43.59
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
84 TestFunctional/serial/CacheCmd/cache/add_local 1.5
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.15
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 32.36
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.46
95 TestFunctional/serial/LogsFileCmd 1.48
96 TestFunctional/serial/InvalidService 4.79
98 TestFunctional/parallel/ConfigCmd 0.51
99 TestFunctional/parallel/DashboardCmd 9.34
100 TestFunctional/parallel/DryRun 0.43
101 TestFunctional/parallel/InternationalLanguage 0.22
102 TestFunctional/parallel/StatusCmd 1.3
106 TestFunctional/parallel/ServiceCmdConnect 7.81
107 TestFunctional/parallel/AddonsCmd 0.23
108 TestFunctional/parallel/PersistentVolumeClaim 24.79
110 TestFunctional/parallel/SSHCmd 0.82
111 TestFunctional/parallel/CpCmd 2.61
113 TestFunctional/parallel/FileSync 0.29
114 TestFunctional/parallel/CertSync 2.28
118 TestFunctional/parallel/NodeLabels 0.19
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
122 TestFunctional/parallel/License 0.32
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
135 TestFunctional/parallel/ServiceCmd/List 0.52
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
138 TestFunctional/parallel/ProfileCmd/profile_list 0.52
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
141 TestFunctional/parallel/ServiceCmd/Format 0.49
142 TestFunctional/parallel/ServiceCmd/URL 0.69
143 TestFunctional/parallel/MountCmd/any-port 7.89
144 TestFunctional/parallel/MountCmd/specific-port 2.24
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.69
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 1.32
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.88
153 TestFunctional/parallel/ImageCommands/Setup 0.69
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
157 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.46
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.11
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.33
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.8
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.9
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.9
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.45
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.47
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.32
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.87
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.67
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.69
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.99
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.28
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.53
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.06
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.2
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.81
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.75
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.42
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 197.55
265 TestMultiControlPlane/serial/DeployApp 7.53
266 TestMultiControlPlane/serial/PingHostFromPods 1.47
267 TestMultiControlPlane/serial/AddWorkerNode 59.17
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
270 TestMultiControlPlane/serial/CopyFile 20.1
271 TestMultiControlPlane/serial/StopSecondaryNode 12.86
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
273 TestMultiControlPlane/serial/RestartSecondaryNode 30.46
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.44
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
278 TestMultiControlPlane/serial/StopCluster 36.16
279 TestMultiControlPlane/serial/RestartCluster 94.92
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.85
281 TestMultiControlPlane/serial/AddSecondaryNode 80.01
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
287 TestJSONOutput/start/Command 74.68
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.81
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 42.27
313 TestKicCustomNetwork/use_default_bridge_network 35.03
314 TestKicExistingNetwork 34.04
315 TestKicCustomSubnet 37.32
316 TestKicStaticIP 37.06
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.71
321 TestMountStart/serial/StartWithMountFirst 8.97
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.93
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.42
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 141.36
333 TestMultiNode/serial/DeployApp2Nodes 5.02
334 TestMultiNode/serial/PingHostFrom2Pods 0.92
335 TestMultiNode/serial/AddNode 56.86
336 TestMultiNode/serial/MultiNodeLabels 0.08
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.57
339 TestMultiNode/serial/StopNode 2.45
340 TestMultiNode/serial/StartAfterStop 8.52
341 TestMultiNode/serial/RestartKeepsNodes 76.58
342 TestMultiNode/serial/DeleteNode 5.76
343 TestMultiNode/serial/StopMultiNode 24.2
344 TestMultiNode/serial/RestartMultiNode 51.9
345 TestMultiNode/serial/ValidateNameConflict 34.92
352 TestScheduledStopUnix 106.68
355 TestInsufficientStorage 12.77
356 TestRunningBinaryUpgrade 303.02
359 TestMissingContainerUpgrade 121.73
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 44.85
363 TestNoKubernetes/serial/StartWithStopK8s 28.3
364 TestNoKubernetes/serial/Start 8.05
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.71
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 7.72
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.66
372 TestStoppedBinaryUpgrade/Upgrade 305.44
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.84
382 TestPause/serial/Start 81.07
383 TestPause/serial/SecondStartNoReconfiguration 17.52
397 TestStartStop/group/old-k8s-version/serial/FirstStart 58.5
398 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
400 TestStartStop/group/old-k8s-version/serial/Stop 12.03
401 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
402 TestStartStop/group/old-k8s-version/serial/SecondStart 50.65
403 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
404 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
405 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
410 TestStartStop/group/embed-certs/serial/FirstStart 85.27
411 TestStartStop/group/embed-certs/serial/DeployApp 10.33
413 TestStartStop/group/embed-certs/serial/Stop 12
414 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
415 TestStartStop/group/embed-certs/serial/SecondStart 50.86
416 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
417 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
418 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
421 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.36
422 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
424 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
425 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
426 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.12
427 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
428 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
429 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
435 TestStartStop/group/no-preload/serial/Stop 1.38
436 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
438 TestStartStop/group/newest-cni/serial/DeployApp 0
440 TestStartStop/group/newest-cni/serial/Stop 1.36
441 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
444 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
445 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
446 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (7.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.476252972s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.48s)
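Each "(dbg) Run:" / "(dbg) Done:" pair above is the integration test shelling out to the freshly built minikube binary and timing the call. A rough sketch of that invocation in Go (hypothetical, and much simpler than the actual helpers in test/integration), using the command recorded above:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // The 10-minute timeout is an arbitrary choice for this sketch.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()

        // Command line copied from the report above, profile name included.
        cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64",
            "start", "-o=json", "--download-only", "-p", "download-only-177412",
            "--force", "--alsologtostderr",
            "--kubernetes-version=v1.28.0",
            "--container-runtime=crio", "--driver=docker")

        start := time.Now()
        out, err := cmd.CombinedOutput()
        fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
    }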

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1208 00:12:40.542861  791807 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1208 00:12:40.542948  791807 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-177412
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-177412: exit status 85 (92.53837ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-177412 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:33.116585  791812 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:33.116771  791812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:33.116803  791812 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:33.116825  791812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:33.117120  791812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	W1208 00:12:33.117281  791812 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22054-789938/.minikube/config/config.json: open /home/jenkins/minikube-integration/22054-789938/.minikube/config/config.json: no such file or directory
	I1208 00:12:33.117736  791812 out.go:368] Setting JSON to true
	I1208 00:12:33.118599  791812 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17685,"bootTime":1765135068,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:12:33.118701  791812 start.go:143] virtualization:  
	I1208 00:12:33.124276  791812 out.go:99] [download-only-177412] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1208 00:12:33.124477  791812 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 00:12:33.124613  791812 notify.go:221] Checking for updates...
	I1208 00:12:33.129011  791812 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:12:33.133470  791812 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:33.136706  791812 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:12:33.139989  791812 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:12:33.143229  791812 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:12:33.149339  791812 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:12:33.149650  791812 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:33.173196  791812 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:33.173318  791812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:33.231673  791812 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-08 00:12:33.222034508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:33.231780  791812 docker.go:319] overlay module found
	I1208 00:12:33.234930  791812 out.go:99] Using the docker driver based on user configuration
	I1208 00:12:33.234975  791812 start.go:309] selected driver: docker
	I1208 00:12:33.234986  791812 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:33.235112  791812 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:33.287991  791812 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-08 00:12:33.279042849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:33.288146  791812 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:33.288429  791812 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:12:33.288579  791812 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:12:33.291836  791812 out.go:171] Using Docker driver with root privileges
	I1208 00:12:33.294992  791812 cni.go:84] Creating CNI manager for ""
	I1208 00:12:33.295067  791812 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:12:33.295083  791812 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:12:33.295163  791812 start.go:353] cluster config:
	{Name:download-only-177412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-177412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:12:33.298391  791812 out.go:99] Starting "download-only-177412" primary control-plane node in "download-only-177412" cluster
	I1208 00:12:33.298407  791812 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:12:33.301251  791812 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:12:33.301286  791812 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 00:12:33.301429  791812 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:12:33.316731  791812 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:33.316924  791812 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:12:33.317025  791812 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:33.368815  791812 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:12:33.368851  791812 cache.go:65] Caching tarball of preloaded images
	I1208 00:12:33.369074  791812 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 00:12:33.372377  791812 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1208 00:12:33.372402  791812 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1208 00:12:33.458472  791812 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1208 00:12:33.458610  791812 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-177412 host does not exist
	  To start a cluster, run: "minikube start -p download-only-177412"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-177412
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.2/json-events (6.98s)
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-931286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-931286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.980232272s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (6.98s)

TestDownloadOnly/v1.34.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1208 00:12:47.958509  791807 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1208 00:12:47.958547  791807 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-931286
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-931286: exit status 85 (95.83396ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-177412 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-177412                                                                                                                                                   │ download-only-177412 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-931286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-931286 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:41.025063  792014 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:41.025185  792014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:41.025196  792014 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:41.025201  792014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:41.025458  792014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:12:41.025856  792014 out.go:368] Setting JSON to true
	I1208 00:12:41.026713  792014 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17693,"bootTime":1765135068,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:12:41.026784  792014 start.go:143] virtualization:  
	I1208 00:12:41.030363  792014 out.go:99] [download-only-931286] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:12:41.030660  792014 notify.go:221] Checking for updates...
	I1208 00:12:41.033472  792014 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:12:41.036481  792014 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:41.039488  792014 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:12:41.042482  792014 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:12:41.045415  792014 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:12:41.051167  792014 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:12:41.051462  792014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:41.076387  792014 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:41.076497  792014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:41.140511  792014 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-08 00:12:41.131147442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:41.140619  792014 docker.go:319] overlay module found
	I1208 00:12:41.143627  792014 out.go:99] Using the docker driver based on user configuration
	I1208 00:12:41.143662  792014 start.go:309] selected driver: docker
	I1208 00:12:41.143671  792014 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:41.143770  792014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:41.200870  792014 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-08 00:12:41.192359043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:41.201027  792014 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:41.201289  792014 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:12:41.201441  792014 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:12:41.204582  792014 out.go:171] Using Docker driver with root privileges
	I1208 00:12:41.207355  792014 cni.go:84] Creating CNI manager for ""
	I1208 00:12:41.207421  792014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:12:41.207433  792014 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:12:41.207507  792014 start.go:353] cluster config:
	{Name:download-only-931286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-931286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:12:41.210503  792014 out.go:99] Starting "download-only-931286" primary control-plane node in "download-only-931286" cluster
	I1208 00:12:41.210519  792014 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:12:41.213399  792014 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:12:41.213439  792014 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:12:41.213596  792014 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:12:41.229010  792014 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:41.229169  792014 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:12:41.229193  792014 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1208 00:12:41.229201  792014 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1208 00:12:41.229208  792014 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1208 00:12:41.270521  792014 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1208 00:12:41.270547  792014 cache.go:65] Caching tarball of preloaded images
	I1208 00:12:41.270730  792014 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 00:12:41.273982  792014 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1208 00:12:41.274013  792014 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1208 00:12:41.377313  792014 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1208 00:12:41.377371  792014 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-931286 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931286"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.10s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-931286
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (7.65s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-670892 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-670892 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.645633966s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (7.65s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1208 00:12:56.058669  791807 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1208 00:12:56.058706  791807 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-670892
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-670892: exit status 85 (89.22233ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-177412 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-177412 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-177412                                                                                                                                                          │ download-only-177412 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-931286 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-931286 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-931286                                                                                                                                                          │ download-only-931286 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-670892 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-670892 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:48.461766  792213 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:48.461957  792213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:48.461989  792213 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:48.462009  792213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:48.462290  792213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:12:48.462726  792213 out.go:368] Setting JSON to true
	I1208 00:12:48.463602  792213 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17701,"bootTime":1765135068,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:12:48.463705  792213 start.go:143] virtualization:  
	I1208 00:12:48.467073  792213 out.go:99] [download-only-670892] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:12:48.467412  792213 notify.go:221] Checking for updates...
	I1208 00:12:48.470899  792213 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:12:48.474259  792213 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:48.477256  792213 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:12:48.480107  792213 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:12:48.482970  792213 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:12:48.488575  792213 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:12:48.488868  792213 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:48.509818  792213 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:48.509943  792213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:48.579179  792213 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:48.569185035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:48.579295  792213 docker.go:319] overlay module found
	I1208 00:12:48.582352  792213 out.go:99] Using the docker driver based on user configuration
	I1208 00:12:48.582385  792213 start.go:309] selected driver: docker
	I1208 00:12:48.582391  792213 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:48.582489  792213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:48.634430  792213 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:48.625155572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:48.634593  792213 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:48.634906  792213 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:12:48.635059  792213 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:12:48.638253  792213 out.go:171] Using Docker driver with root privileges
	I1208 00:12:48.641153  792213 cni.go:84] Creating CNI manager for ""
	I1208 00:12:48.641232  792213 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 00:12:48.641245  792213 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:12:48.641321  792213 start.go:353] cluster config:
	{Name:download-only-670892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-670892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:12:48.644276  792213 out.go:99] Starting "download-only-670892" primary control-plane node in "download-only-670892" cluster
	I1208 00:12:48.644299  792213 cache.go:134] Beginning downloading kic base image for docker with crio
	I1208 00:12:48.647107  792213 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:12:48.647157  792213 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:12:48.647372  792213 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:12:48.662787  792213 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:12:48.662965  792213 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:12:48.662989  792213 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1208 00:12:48.662997  792213 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1208 00:12:48.663005  792213 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1208 00:12:48.706428  792213 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1208 00:12:48.706463  792213 cache.go:65] Caching tarball of preloaded images
	I1208 00:12:48.706646  792213 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 00:12:48.709683  792213 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1208 00:12:48.709735  792213 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1208 00:12:48.803997  792213 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1208 00:12:48.804059  792213 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22054-789938/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-670892 host does not exist
	  To start a cluster, run: "minikube start -p download-only-670892"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-670892
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)
=== RUN   TestBinaryMirror
I1208 00:12:57.363801  791807 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-361883 --alsologtostderr --binary-mirror http://127.0.0.1:39527 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-361883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-361883
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-429840
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-429840: exit status 85 (82.827794ms)

-- stdout --
	* Profile "addons-429840" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-429840"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-429840
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-429840: exit status 85 (77.463946ms)

-- stdout --
	* Profile "addons-429840" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-429840"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (155.49s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-429840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-429840 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.48928509s)
--- PASS: TestAddons/Setup (155.49s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-429840 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-429840 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (10.83s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-429840 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-429840 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [afc3fed4-8cf3-419c-98aa-f797fd69ab0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [afc3fed4-8cf3-419c-98aa-f797fd69ab0e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004006158s
addons_test.go:694: (dbg) Run:  kubectl --context addons-429840 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-429840 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-429840 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-429840 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

TestAddons/StoppedEnableDisable (12.41s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-429840
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-429840: (12.118790938s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-429840
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-429840
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-429840
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (42.49s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1208 01:37:46.335741  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-489608 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.531638507s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-489608 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-489608 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-489608 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-489608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-489608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-489608: (2.165189738s)
--- PASS: TestCertOptions (42.49s)

TestCertExpiration (237.28s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-428091 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.356529955s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-428091 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.954878009s)
helpers_test.go:175: Cleaning up "cert-expiration-428091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-428091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-428091: (2.968450966s)
--- PASS: TestCertExpiration (237.28s)

TestForceSystemdFlag (35.04s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-279155 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1208 01:35:45.329227  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-279155 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.188842918s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-279155 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-279155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-279155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-279155: (2.49949293s)
--- PASS: TestForceSystemdFlag (35.04s)

TestForceSystemdEnv (44.63s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-520011 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-520011 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.56705155s)
helpers_test.go:175: Cleaning up "force-systemd-env-520011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-520011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-520011: (3.066690729s)
--- PASS: TestForceSystemdEnv (44.63s)

TestErrorSpam/setup (30.43s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-485903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-485903 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-485903 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-485903 --driver=docker  --container-runtime=crio: (30.427233209s)
--- PASS: TestErrorSpam/setup (30.43s)

TestErrorSpam/start (0.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.15s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 status
--- PASS: TestErrorSpam/status (1.15s)

                                                
                                    
TestErrorSpam/pause (6.97s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause: exit status 80 (2.396943204s)

                                                
                                                
-- stdout --
	* Pausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause: exit status 80 (2.097888374s)

                                                
                                                
-- stdout --
	* Pausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause: exit status 80 (2.473007063s)

                                                
                                                
-- stdout --
	* Pausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.97s)

                                                
                                    
TestErrorSpam/unpause (5.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause: exit status 80 (1.919823839s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause: exit status 80 (1.575498227s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause: exit status 80 (2.20015666s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-485903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T00:19:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.70s)
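
Each of the pause and unpause runs above exits with status 80 for the same reason: the guest-side probe `sudo runc list -f json` fails inside the node because /run/runc does not exist yet. Note that the subtest still reports PASS; the failures are only logged. A minimal sketch of reproducing that probe outside the test harness, assuming the binary path and profile name shown in the log (this is not the test's own helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same probe minikube's pause path shells out to, via `minikube ssh`.
		// Binary path and profile name are taken from the log above.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "nospam-485903",
			"ssh", "sudo runc list -f json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// On this run the probe fails: "open /run/runc: no such file or directory".
			fmt.Printf("probe failed with exit code %d\n", exitErr.ExitCode())
		}
	}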

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 stop: (1.313668461s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485903 --log_dir /tmp/nospam-485903 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1208 00:20:34.382672  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.389106  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.400555  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.422021  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.463445  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.544942  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:34.706489  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:35.028059  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:35.670106  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:36.951864  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:39.514727  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:44.636203  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:54.878567  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-714395 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.352633761s)
--- PASS: TestFunctional/serial/StartWithProxy (80.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.59s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1208 00:21:12.625074  791807 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --alsologtostderr -v=8
E1208 00:21:15.360157  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-714395 --alsologtostderr -v=8: (43.584525935s)
functional_test.go:678: soft start took 43.587397259s for "functional-714395" cluster.
I1208 00:21:56.209941  791807 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (43.59s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-714395 get po -A
E1208 00:21:56.321959  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:3.1: (1.140909712s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:3.3: (1.196554368s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 cache add registry.k8s.io/pause:latest: (1.108493855s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-714395 /tmp/TestFunctionalserialCacheCmdcacheadd_local1293025447/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache add minikube-local-cache-test:functional-714395
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache delete minikube-local-cache-test:functional-714395
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-714395
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.838771ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
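
The cache_reload steps above are: remove the cached image from the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is present again. A minimal sketch of the same round trip using the binary path and profile name from the log (a sketch, not the test's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used in this report with the given args.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		p := "functional-714395"
		// 1. Remove the image from inside the node.
		run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		// 2. Inspecting it should now fail ("no such image", exit status 1).
		if _, err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("unexpected: image still present")
		}
		// 3. Push everything in minikube's local cache back into the node.
		if out, err := run("-p", p, "cache", "reload"); err != nil {
			fmt.Println("cache reload failed:", err, out)
		}
		// 4. The image should be present again.
		if _, err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}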

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 kubectl -- --context functional-714395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-714395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.36s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-714395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.359080773s)
functional_test.go:776: restart took 32.35924811s for "functional-714395" cluster.
I1208 00:22:36.494269  791807 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (32.36s)
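
ExtraConfig restarts the cluster with `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision`; the option format is component.key=value. One hedged way to spot-check that the option reached the API server (not something the test itself does) is to inspect the kube-apiserver static pod's command line:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// kubeadm labels the static API server pod with component=kube-apiserver;
		// its container command holds the --enable-admission-plugins flag.
		out, err := exec.Command("kubectl", "--context", "functional-714395",
			"-n", "kube-system", "get", "pod", "-l", "component=kube-apiserver",
			"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(out), "NamespaceAutoProvision") {
			fmt.Println("admission plugin flag reached kube-apiserver")
		} else {
			fmt.Println("flag not found in:", string(out))
		}
	}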

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-714395 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
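
ComponentHealth selects the control-plane pods by the tier=control-plane label and requires each to be in phase Running with a Ready condition of True, which is what the phase/status lines above report. A rough equivalent of that check against the same context, using only standard pod JSON fields (a sketch, not the test code):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList keeps just the fields the health check needs.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-714395",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}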

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 logs: (1.459614304s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 logs --file /tmp/TestFunctionalserialLogsFileCmd266569979/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 logs --file /tmp/TestFunctionalserialLogsFileCmd266569979/001/logs.txt: (1.483600353s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.79s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-714395 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-714395
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-714395: exit status 115 (386.232257ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30936 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-714395 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-714395 delete -f testdata/invalidsvc.yaml: (1.16030615s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 config get cpus: exit status 14 (98.719656ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 config get cpus: exit status 14 (81.244359ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-714395 --alsologtostderr -v=1]
E1208 00:23:18.244249  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-714395 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 817055: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.34s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-714395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (185.311203ms)

                                                
                                                
-- stdout --
	* [functional-714395] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:23:15.664493  816782 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:23:15.664706  816782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:15.664734  816782 out.go:374] Setting ErrFile to fd 2...
	I1208 00:23:15.664750  816782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:15.665014  816782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:23:15.665389  816782 out.go:368] Setting JSON to false
	I1208 00:23:15.666354  816782 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18328,"bootTime":1765135068,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:23:15.666450  816782 start.go:143] virtualization:  
	I1208 00:23:15.669604  816782 out.go:179] * [functional-714395] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:23:15.672523  816782 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:23:15.672627  816782 notify.go:221] Checking for updates...
	I1208 00:23:15.679886  816782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:23:15.682669  816782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:23:15.685483  816782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:23:15.688219  816782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:23:15.690997  816782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:23:15.694289  816782 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:23:15.694962  816782 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:23:15.719050  816782 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:23:15.719171  816782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:23:15.783435  816782 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 00:23:15.772958529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:23:15.783545  816782 docker.go:319] overlay module found
	I1208 00:23:15.786615  816782 out.go:179] * Using the docker driver based on existing profile
	I1208 00:23:15.789439  816782 start.go:309] selected driver: docker
	I1208 00:23:15.789456  816782 start.go:927] validating driver "docker" against &{Name:functional-714395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-714395 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:23:15.789571  816782 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:23:15.793036  816782 out.go:203] 
	W1208 00:23:15.795894  816782 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 00:23:15.798719  816782 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
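
The first dry-run above asks for 250MB and is rejected during validation with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23; the second dry-run, without the memory flag, succeeds. A minimal sketch that asserts the expected exit code, assuming the binary path and profile from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same dry-run invocation as the log; 250MB is below minikube's 1800MB minimum.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-714395",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 23 {
			fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit code (23)")
			return
		}
		fmt.Println("unexpected result:", err)
	}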

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-714395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-714395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (216.639679ms)

                                                
                                                
-- stdout --
	* [functional-714395] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 00:23:15.464199  816738 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:23:15.464334  816738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:15.464345  816738 out.go:374] Setting ErrFile to fd 2...
	I1208 00:23:15.464350  816738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:23:15.465467  816738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:23:15.465877  816738 out.go:368] Setting JSON to false
	I1208 00:23:15.466805  816738 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18328,"bootTime":1765135068,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:23:15.466922  816738 start.go:143] virtualization:  
	I1208 00:23:15.470552  816738 out.go:179] * [functional-714395] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1208 00:23:15.474624  816738 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:23:15.474726  816738 notify.go:221] Checking for updates...
	I1208 00:23:15.480966  816738 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:23:15.483944  816738 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:23:15.487465  816738 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:23:15.490475  816738 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:23:15.493488  816738 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:23:15.496861  816738 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:23:15.497522  816738 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:23:15.530964  816738 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:23:15.531082  816738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:23:15.598689  816738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 00:23:15.588983947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:23:15.598790  816738 docker.go:319] overlay module found
	I1208 00:23:15.601907  816738 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1208 00:23:15.604847  816738 start.go:309] selected driver: docker
	I1208 00:23:15.604867  816738 start.go:927] validating driver "docker" against &{Name:functional-714395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-714395 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:23:15.604972  816738 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:23:15.608424  816738 out.go:203] 
	W1208 00:23:15.611360  816738 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 00:23:15.614204  816738 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-714395 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-714395 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bnjfr" [54739f3c-de5f-42bf-94f8-0fd52d99d157] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bnjfr" [54739f3c-de5f-42bf-94f8-0fd52d99d157] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00375671s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32477
functional_test.go:1680: http://192.168.49.2:32477: success! body:
Request served by hello-node-connect-7d85dfc575-bnjfr

HTTP/1.1 GET /

Host: 192.168.49.2:32477
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.81s)
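
The NodePort round trip above can be reproduced by hand. A minimal sketch, assuming a running profile named "my-profile" (a placeholder) with the kicbase/echo-server image reachable:
# deploy, expose on a NodePort, resolve the URL, and hit it
kubectl create deployment hello-node-connect --image=kicbase/echo-server
kubectl expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(minikube -p my-profile service hello-node-connect --url)
curl "$URL"   # echo-server returns the request dump, as captured above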

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (24.79s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fc5dc16e-e80a-4638-a796-888f11fa74e8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002940802s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-714395 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-714395 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-714395 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-714395 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [da412a35-4fab-46d1-80a6-5ec9d190a061] Pending
helpers_test.go:352: "sp-pod" [da412a35-4fab-46d1-80a6-5ec9d190a061] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [da412a35-4fab-46d1-80a6-5ec9d190a061] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003571526s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-714395 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-714395 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-714395 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [752ad71b-cd07-4d44-90d3-790dcc787807] Pending
helpers_test.go:352: "sp-pod" [752ad71b-cd07-4d44-90d3-790dcc787807] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004680442s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-714395 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.79s)
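
The pvc.yaml applied above is not reproduced in this log. A minimal claim of the same shape, fed through a heredoc, is sketched below; the name and size are placeholders, and the cluster's default storage class (backed by storage-provisioner) is assumed:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# once bound, the claim can be inspected the same way the test does
kubectl get pvc myclaim -o json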

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (2.61s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh -n functional-714395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cp functional-714395:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2571368681/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh -n functional-714395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh -n functional-714395 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.61s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/791807/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /etc/test/nested/copy/791807/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/791807.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /etc/ssl/certs/791807.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/791807.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /usr/share/ca-certificates/791807.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7918072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /etc/ssl/certs/7918072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7918072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /usr/share/ca-certificates/7918072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.28s)
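
The 791807.pem names above are derived from the test's own process id. To check a synced certificate by hand, the same paths can be read over ssh; a sketch, with "my-profile" and "mycert.pem" as placeholders for a certificate dropped into $MINIKUBE_HOME/certs before the cluster was started:
minikube -p my-profile ssh "sudo cat /etc/ssl/certs/mycert.pem"
minikube -p my-profile ssh "sudo cat /usr/share/ca-certificates/mycert.pem"
# the hashed name (like 51391683.0 above) is the openssl subject hash of the cert
openssl x509 -noout -hash -in mycert.pem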

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-714395 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh "sudo systemctl is-active docker": exit status 1 (376.157313ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh "sudo systemctl is-active containerd": exit status 1 (299.862231ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
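
The non-zero exits above are expected: systemctl is-active exits with status 3 for an inactive unit, which minikube surfaces as its own exit status 1. The same check by hand on a crio profile ("my-profile" is a placeholder):
minikube -p my-profile ssh "sudo systemctl is-active docker"      # expect "inactive"
minikube -p my-profile ssh "sudo systemctl is-active containerd"  # expect "inactive"
minikube -p my-profile ssh "sudo systemctl is-active crio"        # expect "active"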

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 814765: os: process already finished
helpers_test.go:519: unable to terminate pid 814549: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-714395 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d5374c59-8800-4da6-9126-8a722283f671] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d5374c59-8800-4da6-9126-8a722283f671] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003549385s
I1208 00:22:55.783306  791807 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-714395 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.246.49 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
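
To reproduce the tunnel check by hand: keep minikube tunnel running in one shell (it may prompt for elevated privileges), then curl the LoadBalancer IP reported on the service. A sketch with "my-profile" as a placeholder; the 10.101.246.49 address above is specific to this run:
minikube -p my-profile tunnel --alsologtostderr
# in a second shell:
LB_IP=$(kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I "http://${LB_IP}/"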

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-714395 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-714395 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-714395 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-87grz" [22d07162-419e-4f0e-8d10-53d931e1931e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-87grz" [22d07162-419e-4f0e-8d10-53d931e1931e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003649663s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service list -o json
functional_test.go:1504: Took "598.581281ms" to run "out/minikube-linux-arm64 -p functional-714395 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "436.645046ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "85.304298ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30660
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "500.36257ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "109.1381ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30660
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.69s)

TestFunctional/parallel/MountCmd/any-port (7.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdany-port1157541776/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765153393427076864" to /tmp/TestFunctionalparallelMountCmdany-port1157541776/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765153393427076864" to /tmp/TestFunctionalparallelMountCmdany-port1157541776/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765153393427076864" to /tmp/TestFunctionalparallelMountCmdany-port1157541776/001/test-1765153393427076864
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 00:23 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 00:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 00:23 test-1765153393427076864
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh cat /mount-9p/test-1765153393427076864
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-714395 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [2c4c4770-989c-4547-8340-8dcd4597d0d2] Pending
helpers_test.go:352: "busybox-mount" [2c4c4770-989c-4547-8340-8dcd4597d0d2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [2c4c4770-989c-4547-8340-8dcd4597d0d2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [2c4c4770-989c-4547-8340-8dcd4597d0d2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00336003s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-714395 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdany-port1157541776/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.89s)
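
The same 9p mount can be exercised by hand; a sketch, with /tmp/hostdir and "my-profile" as placeholders:
minikube mount -p my-profile /tmp/hostdir:/mount-9p &
minikube -p my-profile ssh "findmnt -T /mount-9p | grep 9p"   # confirm a 9p filesystem is mounted
minikube -p my-profile ssh -- ls -la /mount-9p                # host files should be visible
minikube mount -p my-profile --kill=true                      # tear the mount down, as VerifyCleanup does below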

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdspecific-port3867160367/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (607.74157ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1208 00:23:21.922635  791807 retry.go:31] will retry after 336.985162ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdspecific-port3867160367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh "sudo umount -f /mount-9p": exit status 1 (375.335487ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-714395 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdspecific-port3867160367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T" /mount1: exit status 1 (1.001605188s)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1208 00:23:24.556212  791807 retry.go:31] will retry after 428.644133ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T" /mount1
2025/12/08 00:23:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-714395 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-714395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3159777128/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 version -o=json --components: (1.314917295s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-714395 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-714395
localhost/kicbase/echo-server:functional-714395
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-714395 image ls --format short --alsologtostderr:
I1208 00:23:32.834802  819659 out.go:360] Setting OutFile to fd 1 ...
I1208 00:23:32.835424  819659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:32.835440  819659 out.go:374] Setting ErrFile to fd 2...
I1208 00:23:32.835445  819659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:32.835716  819659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:23:32.836333  819659 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:32.836460  819659 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:32.836963  819659 cli_runner.go:164] Run: docker container inspect functional-714395 --format={{.State.Status}}
I1208 00:23:32.861890  819659 ssh_runner.go:195] Run: systemctl --version
I1208 00:23:32.861946  819659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-714395
I1208 00:23:32.880924  819659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-714395/id_rsa Username:docker}
I1208 00:23:32.989769  819659 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-714395 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-714395  │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test     │ functional-714395  │ 27c10cccf0da4 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-714395 image ls --format table --alsologtostderr:
I1208 00:23:33.131947  819739 out.go:360] Setting OutFile to fd 1 ...
I1208 00:23:33.132244  819739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.132260  819739 out.go:374] Setting ErrFile to fd 2...
I1208 00:23:33.132265  819739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.132605  819739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:23:33.133266  819739 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.133428  819739 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.133988  819739 cli_runner.go:164] Run: docker container inspect functional-714395 --format={{.State.Status}}
I1208 00:23:33.155530  819739 ssh_runner.go:195] Run: systemctl --version
I1208 00:23:33.155607  819739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-714395
I1208 00:23:33.176594  819739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-714395/id_rsa Username:docker}
I1208 00:23:33.290779  819739 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
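
The same listing is available in the other formats exercised by the neighboring tests; a sketch, with "my-profile" as a placeholder:
minikube -p my-profile image ls --format short
minikube -p my-profile image ls --format table
minikube -p my-profile image ls --format json
minikube -p my-profile image ls --format yaml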

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-714395 image ls --format json --alsologtostderr:
[{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31
c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controll
er-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"11133393
8"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"27c10cccf0da4c2af754137e1b4898ac7baf0e1fa0c50625e04701ebdcf0ef5a","repoDigests":["localhost/minikube-local-cache-test@sha256:663a4eb4c8ba77bdf261ffb45784cc9b027af473d6984bcb24711db9363c2d5c"],"repoTags":["localhost/minikube-local-cache-test:functional-714395"],"size":"
3330"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f
1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":
["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-714395"],"size":"4789170"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-714395 image ls --format json --alsologtostderr:
I1208 00:23:33.116909  819734 out.go:360] Setting OutFile to fd 1 ...
I1208 00:23:33.117083  819734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.117106  819734 out.go:374] Setting ErrFile to fd 2...
I1208 00:23:33.117127  819734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.117486  819734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:23:33.118806  819734 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.119044  819734 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.119636  819734 cli_runner.go:164] Run: docker container inspect functional-714395 --format={{.State.Status}}
I1208 00:23:33.140730  819734 ssh_runner.go:195] Run: systemctl --version
I1208 00:23:33.140823  819734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-714395
I1208 00:23:33.177846  819734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-714395/id_rsa Username:docker}
I1208 00:23:33.290472  819734 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-714395 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-714395
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 27c10cccf0da4c2af754137e1b4898ac7baf0e1fa0c50625e04701ebdcf0ef5a
repoDigests:
- localhost/minikube-local-cache-test@sha256:663a4eb4c8ba77bdf261ffb45784cc9b027af473d6984bcb24711db9363c2d5c
repoTags:
- localhost/minikube-local-cache-test:functional-714395
size: "3330"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-714395 image ls --format yaml --alsologtostderr:
I1208 00:23:32.836944  819660 out.go:360] Setting OutFile to fd 1 ...
I1208 00:23:32.837143  819660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:32.837170  819660 out.go:374] Setting ErrFile to fd 2...
I1208 00:23:32.837187  819660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:32.837469  819660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:23:32.838160  819660 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:32.838358  819660 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:32.838948  819660 cli_runner.go:164] Run: docker container inspect functional-714395 --format={{.State.Status}}
I1208 00:23:32.861858  819660 ssh_runner.go:195] Run: systemctl --version
I1208 00:23:32.861915  819660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-714395
I1208 00:23:32.884403  819660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-714395/id_rsa Username:docker}
I1208 00:23:32.994902  819660 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-714395 ssh pgrep buildkitd: exit status 1 (298.381682ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr: (3.333481138s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2917e7dade7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-714395
--> 2fcf1a11228
Successfully tagged localhost/my-image:functional-714395
2fcf1a1122861d18c6c7387f608ccbf682208a4836cfcce89cbf3cbb46f3ceab
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-714395 image build -t localhost/my-image:functional-714395 testdata/build --alsologtostderr:
I1208 00:23:33.680838  819869 out.go:360] Setting OutFile to fd 1 ...
I1208 00:23:33.681650  819869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.681670  819869 out.go:374] Setting ErrFile to fd 2...
I1208 00:23:33.681676  819869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:23:33.682012  819869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:23:33.682704  819869 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.683495  819869 config.go:182] Loaded profile config "functional-714395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 00:23:33.684086  819869 cli_runner.go:164] Run: docker container inspect functional-714395 --format={{.State.Status}}
I1208 00:23:33.702292  819869 ssh_runner.go:195] Run: systemctl --version
I1208 00:23:33.702354  819869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-714395
I1208 00:23:33.720146  819869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-714395/id_rsa Username:docker}
I1208 00:23:33.825414  819869 build_images.go:162] Building image from path: /tmp/build.3642444008.tar
I1208 00:23:33.825480  819869 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 00:23:33.833611  819869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3642444008.tar
I1208 00:23:33.837414  819869 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3642444008.tar: stat -c "%s %y" /var/lib/minikube/build/build.3642444008.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3642444008.tar': No such file or directory
I1208 00:23:33.837447  819869 ssh_runner.go:362] scp /tmp/build.3642444008.tar --> /var/lib/minikube/build/build.3642444008.tar (3072 bytes)
I1208 00:23:33.856016  819869 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3642444008
I1208 00:23:33.864853  819869 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3642444008 -xf /var/lib/minikube/build/build.3642444008.tar
I1208 00:23:33.873356  819869 crio.go:315] Building image: /var/lib/minikube/build/build.3642444008
I1208 00:23:33.873431  819869 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-714395 /var/lib/minikube/build/build.3642444008 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1208 00:23:36.934788  819869 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-714395 /var/lib/minikube/build/build.3642444008 --cgroup-manager=cgroupfs: (3.061331067s)
I1208 00:23:36.934884  819869 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3642444008
I1208 00:23:36.942796  819869 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3642444008.tar
I1208 00:23:36.950333  819869 build_images.go:218] Built localhost/my-image:functional-714395 from /tmp/build.3642444008.tar
I1208 00:23:36.950364  819869 build_images.go:134] succeeded building to: functional-714395
I1208 00:23:36.950370  819869 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)
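Note: the three STEP lines in the stdout above imply a build context roughly like the sketch below. This is a reconstruction for readers who want to reproduce the case by hand, not the literal testdata/build fixture; the content.txt payload and the /tmp/build-sketch directory are stand-ins.
    # Hypothetical reconstruction of the testdata/build context exercised above.
    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    echo hello > content.txt        # stand-in payload; the fixture's real file content is not shown in the log
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # Same invocation as functional_test.go:330, pointed at the sketch directory instead of testdata/build.
    out/minikube-linux-arm64 -p functional-714395 image build -t localhost/my-image:functional-714395 . --alsologtostderr
    # Confirm the image landed in the node's image store, as functional_test.go:466 does.
    out/minikube-linux-arm64 -p functional-714395 image ls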

TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-714395
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr: (1.057078247s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-714395
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image load --daemon kicbase/echo-server:functional-714395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image save kicbase/echo-server:functional-714395 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image rm kicbase/echo-server:functional-714395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-714395
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-714395 image save --daemon kicbase/echo-server:functional-714395 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-714395
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
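Note: taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon cases above amount to one save/load round trip. A condensed sketch of those same commands, with /tmp/echo-server.tar standing in for the workspace path used by the job:
    # Export the image from the cluster runtime to a tarball on the host.
    out/minikube-linux-arm64 -p functional-714395 image save kicbase/echo-server:functional-714395 /tmp/echo-server.tar --alsologtostderr
    # Remove it from the cluster, then load it back from the tarball.
    out/minikube-linux-arm64 -p functional-714395 image rm kicbase/echo-server:functional-714395 --alsologtostderr
    out/minikube-linux-arm64 -p functional-714395 image load /tmp/echo-server.tar --alsologtostderr
    # Push the image back into the host Docker daemon and confirm it is visible there.
    out/minikube-linux-arm64 -p functional-714395 image save --daemon kicbase/echo-server:functional-714395 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-714395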

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-714395
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-714395
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-714395
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-789938/.minikube/files/etc/test/nested/copy/791807/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:3.1: (1.151465561s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:3.3: (1.17359732s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 cache add registry.k8s.io/pause:latest: (1.13536844s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3639259307/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache add minikube-local-cache-test:functional-525396
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache delete minikube-local-cache-test:functional-525396
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.8s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.008719ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.80s)
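Note: the cache_reload case boils down to deleting the cached image inside the node, proving it is gone, repopulating it from minikube's on-host cache, and checking again. The same four commands, condensed from the log above:
    out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail right after the rmi ("no such image ... present", exit status 1).
    out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-525396 cache reload
    # After the reload the same inspecti succeeds.
    out/minikube-linux-arm64 -p functional-525396 ssh sudo crictl inspecti registry.k8s.io/pause:latest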

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.9s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.9s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs875642491/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 config get cpus: exit status 14 (70.041863ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 config get cpus: exit status 14 (89.894276ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (194.835958ms)

-- stdout --
	* [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 00:52:45.362957  849002 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:52:45.363133  849002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.363165  849002 out.go:374] Setting ErrFile to fd 2...
	I1208 00:52:45.363185  849002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.363482  849002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:52:45.363883  849002 out.go:368] Setting JSON to false
	I1208 00:52:45.364765  849002 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20098,"bootTime":1765135068,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:52:45.364880  849002 start.go:143] virtualization:  
	I1208 00:52:45.368373  849002 out.go:179] * [functional-525396] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:52:45.371525  849002 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:52:45.371666  849002 notify.go:221] Checking for updates...
	I1208 00:52:45.377630  849002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:52:45.380612  849002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:52:45.383616  849002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:52:45.386448  849002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:52:45.389363  849002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:52:45.392678  849002 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:52:45.393386  849002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:52:45.426565  849002 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:52:45.426721  849002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.480436  849002 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.471336783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.480547  849002 docker.go:319] overlay module found
	I1208 00:52:45.483642  849002 out.go:179] * Using the docker driver based on existing profile
	I1208 00:52:45.486519  849002 start.go:309] selected driver: docker
	I1208 00:52:45.486535  849002 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.486643  849002 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:52:45.490219  849002 out.go:203] 
	W1208 00:52:45.492953  849002 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 00:52:45.495756  849002 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)
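Note: DryRun exercises flag validation without touching the existing cluster. An undersized --memory request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the same dry run without the override passes. The two invocations, as run above:
    # Rejected: 250MB is below the 1800MB usable minimum, so this exits 23 before any cluster changes.
    out/minikube-linux-arm64 start -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # Accepted: same profile and driver, no memory override, still only a dry run against the existing profile.
    out/minikube-linux-arm64 start -p functional-525396 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0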

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-525396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (321.691659ms)

-- stdout --
	* [functional-525396] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 00:52:45.072073  848956 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:52:45.072240  848956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.072252  848956 out.go:374] Setting ErrFile to fd 2...
	I1208 00:52:45.072258  848956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:52:45.072693  848956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:52:45.073202  848956 out.go:368] Setting JSON to false
	I1208 00:52:45.074234  848956 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20097,"bootTime":1765135068,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1208 00:52:45.074317  848956 start.go:143] virtualization:  
	I1208 00:52:45.078156  848956 out.go:179] * [functional-525396] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1208 00:52:45.081441  848956 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:52:45.081541  848956 notify.go:221] Checking for updates...
	I1208 00:52:45.093071  848956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:52:45.096251  848956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	I1208 00:52:45.099372  848956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	I1208 00:52:45.103064  848956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:52:45.106300  848956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:52:45.109861  848956 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 00:52:45.110582  848956 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:52:45.168987  848956 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:52:45.169182  848956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:52:45.282979  848956 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:52:45.265567113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:52:45.283111  848956 docker.go:319] overlay module found
	I1208 00:52:45.286355  848956 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1208 00:52:45.289776  848956 start.go:309] selected driver: docker
	I1208 00:52:45.289806  848956 start.go:927] validating driver "docker" against &{Name:functional-525396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-525396 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:52:45.289918  848956 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:52:45.293578  848956 out.go:203] 
	W1208 00:52:45.296616  848956 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 00:52:45.299426  848956 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh -n functional-525396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cp functional-525396:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp59427023/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh -n functional-525396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh -n functional-525396 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.67s)
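Note: the CpCmd helpers cover both copy directions plus a destination directory that does not yet exist on the node. Condensed from the commands above, with /tmp/cp-test.txt standing in for the per-test temp path:
    # Host -> node, then read it back over ssh.
    out/minikube-linux-arm64 -p functional-525396 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-525396 ssh -n functional-525396 "sudo cat /home/docker/cp-test.txt"
    # Node -> host copy of the same file.
    out/minikube-linux-arm64 -p functional-525396 cp functional-525396:/home/docker/cp-test.txt /tmp/cp-test.txt
    # Host -> node into a directory that minikube cp has to create first.
    out/minikube-linux-arm64 -p functional-525396 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt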

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/791807/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /etc/test/nested/copy/791807/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/791807.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /etc/ssl/certs/791807.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/791807.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /usr/share/ca-certificates/791807.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7918072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /etc/ssl/certs/7918072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7918072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /usr/share/ca-certificates/7918072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "sudo systemctl is-active docker": exit status 1 (280.590971ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "sudo systemctl is-active containerd": exit status 1 (275.927384ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-525396 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "351.408588ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.007655ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "337.44928ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.820135ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3736246558/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.189076ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 00:52:38.982698  791807 retry.go:31] will retry after 521.122435ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3736246558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh "sudo umount -f /mount-9p": exit status 1 (302.335899ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-525396 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3736246558/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-525396 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-525396 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1241134368/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-525396 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-525396
localhost/kicbase/echo-server:functional-525396
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-525396 image ls --format short --alsologtostderr:
I1208 00:52:59.504079  851534 out.go:360] Setting OutFile to fd 1 ...
I1208 00:52:59.504211  851534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:59.504222  851534 out.go:374] Setting ErrFile to fd 2...
I1208 00:52:59.504228  851534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:59.504489  851534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:52:59.505122  851534 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:59.505250  851534 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:59.505764  851534 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:52:59.522626  851534 ssh_runner.go:195] Run: systemctl --version
I1208 00:52:59.522689  851534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:52:59.539117  851534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:52:59.641437  851534 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-525396 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ localhost/minikube-local-cache-test     │ functional-525396  │ 27c10cccf0da4 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/my-image                      │ functional-525396  │ b73501ab51d1c │ 1.64MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/kicbase/echo-server           │ functional-525396  │ ce2d2cda2d858 │ 4.79MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-525396 image ls --format table --alsologtostderr:
I1208 00:53:04.291593  852030 out.go:360] Setting OutFile to fd 1 ...
I1208 00:53:04.292374  852030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:04.292408  852030 out.go:374] Setting ErrFile to fd 2...
I1208 00:53:04.292431  852030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:04.292713  852030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:53:04.293371  852030 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:04.293588  852030 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:04.294177  852030 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:53:04.318115  852030 ssh_runner.go:195] Run: systemctl --version
I1208 00:53:04.318174  852030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:53:04.338065  852030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:53:04.441529  852030 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-525396 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-525396"],"size":"4788229"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"27c10cccf0da4c2af754137e1b4898ac7baf0e1fa0c50625e04701ebdcf0ef5a","repoDigests":["localhost/minikube-local-cache-test@sha256:663a4eb4c8ba77bdf261ffb45784cc9b027af473d6984bcb24711db9363c2d5c"],"repoTags":["localhost/minikube-local-cache-test:functional-525396"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-525396 image ls --format json --alsologtostderr:
I1208 00:52:59.729420  851572 out.go:360] Setting OutFile to fd 1 ...
I1208 00:52:59.729637  851572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:59.729668  851572 out.go:374] Setting ErrFile to fd 2...
I1208 00:52:59.729689  851572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:52:59.729948  851572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:52:59.730577  851572 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:59.730750  851572 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:52:59.731349  851572 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:52:59.748639  851572 ssh_runner.go:195] Run: systemctl --version
I1208 00:52:59.748699  851572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:52:59.782951  851572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:52:59.889528  851572 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-525396 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-525396
size: "4788229"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: b73501ab51d1cbf5df107044f321051f71f4543cb2b01b4cb6b9c638c787f186
repoDigests:
- localhost/my-image@sha256:9fee7a6c13a219e405a06d353656aa4324f47559c138f3c938d97fcc24ed52c5
repoTags:
- localhost/my-image:functional-525396
size: "1640791"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 9aaa10a7c5eb6e5e3c6df3e69e08881d609c5ff22451f30f044caf33cead4244
repoDigests:
- docker.io/library/bea26e2123ca1bc1edcce0e90c2510c0763274259b40e0c7fb65adf140be23c8-tmp@sha256:0adb39107cf19adb281c64fbf0cc1a399809e799c69859febaf7de24d207c777
repoTags: []
size: "1638178"
- id: 27c10cccf0da4c2af754137e1b4898ac7baf0e1fa0c50625e04701ebdcf0ef5a
repoDigests:
- localhost/minikube-local-cache-test@sha256:663a4eb4c8ba77bdf261ffb45784cc9b027af473d6984bcb24711db9363c2d5c
repoTags:
- localhost/minikube-local-cache-test:functional-525396
size: "3330"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-525396 image ls --format yaml --alsologtostderr:
I1208 00:53:04.050008  851994 out.go:360] Setting OutFile to fd 1 ...
I1208 00:53:04.050230  851994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:04.050263  851994 out.go:374] Setting ErrFile to fd 2...
I1208 00:53:04.050287  851994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:04.050577  851994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:53:04.051342  851994 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:04.051529  851994 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:04.052108  851994 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:53:04.071222  851994 ssh_runner.go:195] Run: systemctl --version
I1208 00:53:04.071285  851994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:53:04.089928  851994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:53:04.193546  851994 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-525396 ssh pgrep buildkitd: exit status 1 (496.936303ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image build -t localhost/my-image:functional-525396 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-525396 image build -t localhost/my-image:functional-525396 testdata/build --alsologtostderr: (3.335085898s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-525396 image build -t localhost/my-image:functional-525396 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9aaa10a7c5e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-525396
--> b73501ab51d
Successfully tagged localhost/my-image:functional-525396
b73501ab51d1cbf5df107044f321051f71f4543cb2b01b4cb6b9c638c787f186
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-525396 image build -t localhost/my-image:functional-525396 testdata/build --alsologtostderr:
I1208 00:53:00.473797  851678 out.go:360] Setting OutFile to fd 1 ...
I1208 00:53:00.473936  851678 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:00.473947  851678 out.go:374] Setting ErrFile to fd 2...
I1208 00:53:00.473952  851678 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:53:00.474215  851678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
I1208 00:53:00.474887  851678 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:00.475630  851678 config.go:182] Loaded profile config "functional-525396": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 00:53:00.476191  851678 cli_runner.go:164] Run: docker container inspect functional-525396 --format={{.State.Status}}
I1208 00:53:00.494078  851678 ssh_runner.go:195] Run: systemctl --version
I1208 00:53:00.494136  851678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-525396
I1208 00:53:00.518359  851678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/functional-525396/id_rsa Username:docker}
I1208 00:53:00.625454  851678 build_images.go:162] Building image from path: /tmp/build.3354750820.tar
I1208 00:53:00.625530  851678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 00:53:00.633137  851678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3354750820.tar
I1208 00:53:00.636737  851678 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3354750820.tar: stat -c "%s %y" /var/lib/minikube/build/build.3354750820.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3354750820.tar': No such file or directory
I1208 00:53:00.636769  851678 ssh_runner.go:362] scp /tmp/build.3354750820.tar --> /var/lib/minikube/build/build.3354750820.tar (3072 bytes)
I1208 00:53:00.654553  851678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3354750820
I1208 00:53:00.662170  851678 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3354750820 -xf /var/lib/minikube/build/build.3354750820.tar
I1208 00:53:00.670636  851678 crio.go:315] Building image: /var/lib/minikube/build/build.3354750820
I1208 00:53:00.670719  851678 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-525396 /var/lib/minikube/build/build.3354750820 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1208 00:53:03.729989  851678 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-525396 /var/lib/minikube/build/build.3354750820 --cgroup-manager=cgroupfs: (3.059239556s)
I1208 00:53:03.730073  851678 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3354750820
I1208 00:53:03.738408  851678 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3354750820.tar
I1208 00:53:03.746229  851678 build_images.go:218] Built localhost/my-image:functional-525396 from /tmp/build.3354750820.tar
I1208 00:53:03.746260  851678 build_images.go:134] succeeded building to: functional-525396
I1208 00:53:03.746265  851678 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-525396
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image load --daemon kicbase/echo-server:functional-525396 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image save kicbase/echo-server:functional-525396 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image rm kicbase/echo-server:functional-525396 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-525396
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 image save --daemon kicbase/echo-server:functional-525396 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-525396 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-525396
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (197.55s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1208 00:55:34.380032  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.329654  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.336081  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.347454  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.368825  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.410269  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.491684  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.653138  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:45.974785  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:46.616882  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:47.898129  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:50.460059  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:55.581902  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:56:05.823903  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:56:26.306069  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:57:07.268090  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:57:46.335623  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m16.653867969s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (197.55s)

TestMultiControlPlane/serial/DeployApp (7.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 kubectl -- rollout status deployment/busybox: (4.745310345s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-7jpwk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-nj6wv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-vpzhq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-7jpwk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-nj6wv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-vpzhq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-7jpwk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-nj6wv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-vpzhq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.53s)

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-7jpwk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-7jpwk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-nj6wv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-nj6wv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-vpzhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 kubectl -- exec busybox-7b57f96db7-vpzhq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

TestMultiControlPlane/serial/AddWorkerNode (59.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node add --alsologtostderr -v 5
E1208 00:58:29.192035  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 node add --alsologtostderr -v 5: (58.089161785s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5: (1.081986379s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.17s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-766466 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.054091818s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

TestMultiControlPlane/serial/CopyFile (20.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 status --output json --alsologtostderr -v 5: (1.039962365s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp testdata/cp-test.txt ha-766466:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111698866/001/cp-test_ha-766466.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466:/home/docker/cp-test.txt ha-766466-m02:/home/docker/cp-test_ha-766466_ha-766466-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test_ha-766466_ha-766466-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466:/home/docker/cp-test.txt ha-766466-m03:/home/docker/cp-test_ha-766466_ha-766466-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test_ha-766466_ha-766466-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466:/home/docker/cp-test.txt ha-766466-m04:/home/docker/cp-test_ha-766466_ha-766466-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test_ha-766466_ha-766466-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp testdata/cp-test.txt ha-766466-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111698866/001/cp-test_ha-766466-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m02:/home/docker/cp-test.txt ha-766466:/home/docker/cp-test_ha-766466-m02_ha-766466.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test_ha-766466-m02_ha-766466.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m02:/home/docker/cp-test.txt ha-766466-m03:/home/docker/cp-test_ha-766466-m02_ha-766466-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test_ha-766466-m02_ha-766466-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m02:/home/docker/cp-test.txt ha-766466-m04:/home/docker/cp-test_ha-766466-m02_ha-766466-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test_ha-766466-m02_ha-766466-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp testdata/cp-test.txt ha-766466-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111698866/001/cp-test_ha-766466-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m03:/home/docker/cp-test.txt ha-766466:/home/docker/cp-test_ha-766466-m03_ha-766466.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test_ha-766466-m03_ha-766466.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m03:/home/docker/cp-test.txt ha-766466-m02:/home/docker/cp-test_ha-766466-m03_ha-766466-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test_ha-766466-m03_ha-766466-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m03:/home/docker/cp-test.txt ha-766466-m04:/home/docker/cp-test_ha-766466-m03_ha-766466-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test_ha-766466-m03_ha-766466-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp testdata/cp-test.txt ha-766466-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4111698866/001/cp-test_ha-766466-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m04:/home/docker/cp-test.txt ha-766466:/home/docker/cp-test_ha-766466-m04_ha-766466.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466 "sudo cat /home/docker/cp-test_ha-766466-m04_ha-766466.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m04:/home/docker/cp-test.txt ha-766466-m02:/home/docker/cp-test_ha-766466-m04_ha-766466-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m02 "sudo cat /home/docker/cp-test_ha-766466-m04_ha-766466-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 cp ha-766466-m04:/home/docker/cp-test.txt ha-766466-m03:/home/docker/cp-test_ha-766466-m04_ha-766466-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 ssh -n ha-766466-m03 "sudo cat /home/docker/cp-test_ha-766466-m04_ha-766466-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.10s)

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 node stop m02 --alsologtostderr -v 5: (12.039371568s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5: exit status 7 (815.301204ms)

-- stdout --
	ha-766466
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-766466-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766466-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-766466-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1208 00:59:48.167353  867780 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:59:48.167578  867780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:59:48.167621  867780 out.go:374] Setting ErrFile to fd 2...
	I1208 00:59:48.167642  867780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:59:48.168106  867780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 00:59:48.168369  867780 out.go:368] Setting JSON to false
	I1208 00:59:48.168441  867780 mustload.go:66] Loading cluster: ha-766466
	I1208 00:59:48.168616  867780 notify.go:221] Checking for updates...
	I1208 00:59:48.169095  867780 config.go:182] Loaded profile config "ha-766466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 00:59:48.169134  867780 status.go:174] checking status of ha-766466 ...
	I1208 00:59:48.170396  867780 cli_runner.go:164] Run: docker container inspect ha-766466 --format={{.State.Status}}
	I1208 00:59:48.191733  867780 status.go:371] ha-766466 host status = "Running" (err=<nil>)
	I1208 00:59:48.191758  867780 host.go:66] Checking if "ha-766466" exists ...
	I1208 00:59:48.192185  867780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-766466
	I1208 00:59:48.224533  867780 host.go:66] Checking if "ha-766466" exists ...
	I1208 00:59:48.224869  867780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:59:48.224920  867780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-766466
	I1208 00:59:48.243806  867780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/ha-766466/id_rsa Username:docker}
	I1208 00:59:48.353096  867780 ssh_runner.go:195] Run: systemctl --version
	I1208 00:59:48.360027  867780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:59:48.376788  867780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:59:48.439738  867780 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-08 00:59:48.429514988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:59:48.440270  867780 kubeconfig.go:125] found "ha-766466" server: "https://192.168.49.254:8443"
	I1208 00:59:48.440314  867780 api_server.go:166] Checking apiserver status ...
	I1208 00:59:48.440369  867780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:59:48.455426  867780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	I1208 00:59:48.466138  867780 api_server.go:182] apiserver freezer: "13:freezer:/docker/c84b57099ac2370549c430e12dfd4a7d555175e00ebed22cb1cbdcd37e19632f/crio/crio-f48e6f53edfd577ce06cbe6bafdb4b2f8f78d11b7c4724aec12598c768e835bc"
	I1208 00:59:48.466211  867780 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c84b57099ac2370549c430e12dfd4a7d555175e00ebed22cb1cbdcd37e19632f/crio/crio-f48e6f53edfd577ce06cbe6bafdb4b2f8f78d11b7c4724aec12598c768e835bc/freezer.state
	I1208 00:59:48.474610  867780 api_server.go:204] freezer state: "THAWED"
	I1208 00:59:48.474647  867780 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1208 00:59:48.482989  867780 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1208 00:59:48.483022  867780 status.go:463] ha-766466 apiserver status = Running (err=<nil>)
	I1208 00:59:48.483033  867780 status.go:176] ha-766466 status: &{Name:ha-766466 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:59:48.483049  867780 status.go:174] checking status of ha-766466-m02 ...
	I1208 00:59:48.483380  867780 cli_runner.go:164] Run: docker container inspect ha-766466-m02 --format={{.State.Status}}
	I1208 00:59:48.502368  867780 status.go:371] ha-766466-m02 host status = "Stopped" (err=<nil>)
	I1208 00:59:48.502394  867780 status.go:384] host is not running, skipping remaining checks
	I1208 00:59:48.502401  867780 status.go:176] ha-766466-m02 status: &{Name:ha-766466-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:59:48.502421  867780 status.go:174] checking status of ha-766466-m03 ...
	I1208 00:59:48.502886  867780 cli_runner.go:164] Run: docker container inspect ha-766466-m03 --format={{.State.Status}}
	I1208 00:59:48.525564  867780 status.go:371] ha-766466-m03 host status = "Running" (err=<nil>)
	I1208 00:59:48.525592  867780 host.go:66] Checking if "ha-766466-m03" exists ...
	I1208 00:59:48.525904  867780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-766466-m03
	I1208 00:59:48.554395  867780 host.go:66] Checking if "ha-766466-m03" exists ...
	I1208 00:59:48.554728  867780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:59:48.554778  867780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-766466-m03
	I1208 00:59:48.586544  867780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/ha-766466-m03/id_rsa Username:docker}
	I1208 00:59:48.695085  867780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:59:48.708888  867780 kubeconfig.go:125] found "ha-766466" server: "https://192.168.49.254:8443"
	I1208 00:59:48.708918  867780 api_server.go:166] Checking apiserver status ...
	I1208 00:59:48.708960  867780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:59:48.720917  867780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1199/cgroup
	I1208 00:59:48.734167  867780 api_server.go:182] apiserver freezer: "13:freezer:/docker/273c2974dd7408b2ee423788b1284b4f448f50f9c986b06b571f932565f2f958/crio/crio-7bbe81487b8ea63503103a9382cec71d8faef8f67f42ed90567fe981c93dc668"
	I1208 00:59:48.734270  867780 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/273c2974dd7408b2ee423788b1284b4f448f50f9c986b06b571f932565f2f958/crio/crio-7bbe81487b8ea63503103a9382cec71d8faef8f67f42ed90567fe981c93dc668/freezer.state
	I1208 00:59:48.743037  867780 api_server.go:204] freezer state: "THAWED"
	I1208 00:59:48.743078  867780 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1208 00:59:48.751329  867780 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1208 00:59:48.751359  867780 status.go:463] ha-766466-m03 apiserver status = Running (err=<nil>)
	I1208 00:59:48.751368  867780 status.go:176] ha-766466-m03 status: &{Name:ha-766466-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:59:48.751385  867780 status.go:174] checking status of ha-766466-m04 ...
	I1208 00:59:48.751704  867780 cli_runner.go:164] Run: docker container inspect ha-766466-m04 --format={{.State.Status}}
	I1208 00:59:48.769232  867780 status.go:371] ha-766466-m04 host status = "Running" (err=<nil>)
	I1208 00:59:48.769258  867780 host.go:66] Checking if "ha-766466-m04" exists ...
	I1208 00:59:48.769552  867780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-766466-m04
	I1208 00:59:48.787077  867780 host.go:66] Checking if "ha-766466-m04" exists ...
	I1208 00:59:48.787386  867780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:59:48.787436  867780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-766466-m04
	I1208 00:59:48.806120  867780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33532 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/ha-766466-m04/id_rsa Username:docker}
	I1208 00:59:48.912837  867780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:59:48.926424  867780 status.go:176] ha-766466-m04 status: &{Name:ha-766466-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 node start m02 --alsologtostderr -v 5: (28.900065531s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5: (1.426428788s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.46s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.311189738s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.44s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 stop --alsologtostderr -v 5
E1208 01:00:34.383056  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:00:45.329313  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 stop --alsologtostderr -v 5: (27.484640105s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 start --wait true --alsologtostderr -v 5
E1208 01:00:49.405729  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:01:13.033425  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 start --wait true --alsologtostderr -v 5: (1m35.747541381s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.44s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 node delete m03 --alsologtostderr -v 5: (11.020811089s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.00s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 stop --alsologtostderr -v 5
E1208 01:02:46.335744  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 stop --alsologtostderr -v 5: (36.031574402s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5: exit status 7 (123.907272ms)

-- stdout --
	ha-766466
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766466-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-766466-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1208 01:03:13.877428  879717 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:03:13.877557  879717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:03:13.877568  879717 out.go:374] Setting ErrFile to fd 2...
	I1208 01:03:13.877574  879717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:03:13.877916  879717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:03:13.878133  879717 out.go:368] Setting JSON to false
	I1208 01:03:13.878162  879717 mustload.go:66] Loading cluster: ha-766466
	I1208 01:03:13.878830  879717 config.go:182] Loaded profile config "ha-766466": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:03:13.879145  879717 status.go:174] checking status of ha-766466 ...
	I1208 01:03:13.879486  879717 notify.go:221] Checking for updates...
	I1208 01:03:13.879719  879717 cli_runner.go:164] Run: docker container inspect ha-766466 --format={{.State.Status}}
	I1208 01:03:13.899196  879717 status.go:371] ha-766466 host status = "Stopped" (err=<nil>)
	I1208 01:03:13.899217  879717 status.go:384] host is not running, skipping remaining checks
	I1208 01:03:13.899223  879717 status.go:176] ha-766466 status: &{Name:ha-766466 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:03:13.899252  879717 status.go:174] checking status of ha-766466-m02 ...
	I1208 01:03:13.899554  879717 cli_runner.go:164] Run: docker container inspect ha-766466-m02 --format={{.State.Status}}
	I1208 01:03:13.929915  879717 status.go:371] ha-766466-m02 host status = "Stopped" (err=<nil>)
	I1208 01:03:13.929937  879717 status.go:384] host is not running, skipping remaining checks
	I1208 01:03:13.929944  879717 status.go:176] ha-766466-m02 status: &{Name:ha-766466-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:03:13.929963  879717 status.go:174] checking status of ha-766466-m04 ...
	I1208 01:03:13.930267  879717 cli_runner.go:164] Run: docker container inspect ha-766466-m04 --format={{.State.Status}}
	I1208 01:03:13.947177  879717 status.go:371] ha-766466-m04 host status = "Stopped" (err=<nil>)
	I1208 01:03:13.947198  879717 status.go:384] host is not running, skipping remaining checks
	I1208 01:03:13.947204  879717 status.go:176] ha-766466-m04 status: &{Name:ha-766466-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)

TestMultiControlPlane/serial/RestartCluster (94.92s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m33.930602229s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.92s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

TestMultiControlPlane/serial/AddSecondaryNode (80.01s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 node add --control-plane --alsologtostderr -v 5
E1208 01:05:34.379464  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:05:45.329276  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 node add --control-plane --alsologtostderr -v 5: (1m18.923472461s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-766466 status --alsologtostderr -v 5: (1.081772392s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.061075877s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (74.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-485214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-485214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m14.666697289s)
--- PASS: TestJSONOutput/start/Command (74.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-485214 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-485214 --output=json --user=testUser: (5.812654965s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-914555 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-914555 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.836699ms)

-- stdout --
	{"specversion":"1.0","id":"e6454708-1577-4458-b716-2ebcdc807d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-914555] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c06e0e46-b0ef-4709-a776-d10e5caf115f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"9daa00a0-cae7-4cd8-9023-d51e097b7534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae53c794-c4d0-48b8-ad25-631430fbe261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig"}}
	{"specversion":"1.0","id":"602735b1-c8c6-4be7-b851-4ebd69c12298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube"}}
	{"specversion":"1.0","id":"7bb7dc95-4ad5-4a0f-be1e-b7a28df143c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a988d7fc-e7bb-44f2-8b49-859e4947a5c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21e72bd5-a929-4d7d-815d-21803c5e639f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-914555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-914555
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (42.27s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-452883 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-452883 --network=: (39.985380563s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-452883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-452883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-452883: (2.258444668s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.27s)

TestKicCustomNetwork/use_default_bridge_network (35.03s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-845893 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-845893 --network=bridge: (32.868479418s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-845893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-845893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-845893: (2.142663667s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.03s)

                                                
                                    
TestKicExistingNetwork (34.04s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1208 01:09:06.290562  791807 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1208 01:09:06.306680  791807 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1208 01:09:06.306808  791807 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1208 01:09:06.306834  791807 cli_runner.go:164] Run: docker network inspect existing-network
W1208 01:09:06.322433  791807 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1208 01:09:06.322473  791807 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1208 01:09:06.322490  791807 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1208 01:09:06.322606  791807 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 01:09:06.339250  791807 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3e23fb058f94 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:9c:56:fc:18:e4} reservation:<nil>}
I1208 01:09:06.339592  791807 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b5050}
I1208 01:09:06.339619  791807 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1208 01:09:06.339668  791807 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1208 01:09:06.398188  791807 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-218494 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-218494 --network=existing-network: (31.787025803s)
helpers_test.go:175: Cleaning up "existing-network-218494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-218494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-218494: (2.112541744s)
I1208 01:09:40.314658  791807 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.04s)

                                                
                                    
TestKicCustomSubnet (37.32s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-149803 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-149803 --subnet=192.168.60.0/24: (35.038250212s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-149803 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-149803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-149803
E1208 01:10:17.457006  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-149803: (2.255334905s)
--- PASS: TestKicCustomSubnet (37.32s)

                                                
                                    
TestKicStaticIP (37.06s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-731880 --static-ip=192.168.200.200
E1208 01:10:34.384045  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:10:45.330240  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-731880 --static-ip=192.168.200.200: (34.667180333s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-731880 ip
helpers_test.go:175: Cleaning up "static-ip-731880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-731880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-731880: (2.228338058s)
--- PASS: TestKicStaticIP (37.06s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (70.71s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-934109 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-934109 --driver=docker  --container-runtime=crio: (31.200880307s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-936554 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-936554 --driver=docker  --container-runtime=crio: (33.545215611s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-934109
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-936554
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-936554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-936554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-936554: (2.100088315s)
helpers_test.go:175: Cleaning up "first-934109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-934109
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-934109: (2.066849196s)
--- PASS: TestMinikubeProfile (70.71s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-328338 --memory=3072 --mount-string /tmp/TestMountStartserial2424678269/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1208 01:12:08.395006  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-328338 --memory=3072 --mount-string /tmp/TestMountStartserial2424678269/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.964421861s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-328338 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-330225 --memory=3072 --mount-string /tmp/TestMountStartserial2424678269/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-330225 --memory=3072 --mount-string /tmp/TestMountStartserial2424678269/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.925362472s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-328338 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-328338 --alsologtostderr -v=5: (1.711971116s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-330225
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-330225: (1.293202365s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.42s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-330225
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-330225: (7.416411839s)
--- PASS: TestMountStart/serial/RestartStopped (8.42s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (141.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-263003 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1208 01:12:46.335460  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-263003 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.817247967s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-263003 -- rollout status deployment/busybox: (3.238984641s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-bqfvw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-fgjpd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-bqfvw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-fgjpd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-bqfvw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-fgjpd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-bqfvw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-bqfvw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-fgjpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-263003 -- exec busybox-7b57f96db7-fgjpd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
TestMultiNode/serial/AddNode (56.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-263003 -v=5 --alsologtostderr
E1208 01:15:34.379512  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:15:45.329281  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-263003 -v=5 --alsologtostderr: (56.159311837s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-263003 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp testdata/cp-test.txt multinode-263003:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1234512322/001/cp-test_multinode-263003.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003:/home/docker/cp-test.txt multinode-263003-m02:/home/docker/cp-test_multinode-263003_multinode-263003-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test_multinode-263003_multinode-263003-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003:/home/docker/cp-test.txt multinode-263003-m03:/home/docker/cp-test_multinode-263003_multinode-263003-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test_multinode-263003_multinode-263003-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp testdata/cp-test.txt multinode-263003-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1234512322/001/cp-test_multinode-263003-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m02:/home/docker/cp-test.txt multinode-263003:/home/docker/cp-test_multinode-263003-m02_multinode-263003.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test_multinode-263003-m02_multinode-263003.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m02:/home/docker/cp-test.txt multinode-263003-m03:/home/docker/cp-test_multinode-263003-m02_multinode-263003-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test_multinode-263003-m02_multinode-263003-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp testdata/cp-test.txt multinode-263003-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1234512322/001/cp-test_multinode-263003-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m03:/home/docker/cp-test.txt multinode-263003:/home/docker/cp-test_multinode-263003-m03_multinode-263003.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003 "sudo cat /home/docker/cp-test_multinode-263003-m03_multinode-263003.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 cp multinode-263003-m03:/home/docker/cp-test.txt multinode-263003-m02:/home/docker/cp-test_multinode-263003-m03_multinode-263003-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 ssh -n multinode-263003-m02 "sudo cat /home/docker/cp-test_multinode-263003-m03_multinode-263003-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-263003 node stop m03: (1.32282682s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-263003 status: exit status 7 (565.341931ms)

                                                
                                                
-- stdout --
	multinode-263003
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263003-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263003-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr: exit status 7 (563.538363ms)

                                                
                                                
-- stdout --
	multinode-263003
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263003-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263003-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:16:15.260907  930198 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:16:15.261028  930198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:16:15.261039  930198 out.go:374] Setting ErrFile to fd 2...
	I1208 01:16:15.261044  930198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:16:15.261302  930198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:16:15.261491  930198 out.go:368] Setting JSON to false
	I1208 01:16:15.261534  930198 mustload.go:66] Loading cluster: multinode-263003
	I1208 01:16:15.261612  930198 notify.go:221] Checking for updates...
	I1208 01:16:15.262507  930198 config.go:182] Loaded profile config "multinode-263003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:16:15.262529  930198 status.go:174] checking status of multinode-263003 ...
	I1208 01:16:15.263189  930198 cli_runner.go:164] Run: docker container inspect multinode-263003 --format={{.State.Status}}
	I1208 01:16:15.282677  930198 status.go:371] multinode-263003 host status = "Running" (err=<nil>)
	I1208 01:16:15.282703  930198 host.go:66] Checking if "multinode-263003" exists ...
	I1208 01:16:15.283124  930198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-263003
	I1208 01:16:15.316883  930198 host.go:66] Checking if "multinode-263003" exists ...
	I1208 01:16:15.317193  930198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:16:15.317250  930198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-263003
	I1208 01:16:15.336742  930198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33637 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/multinode-263003/id_rsa Username:docker}
	I1208 01:16:15.444521  930198 ssh_runner.go:195] Run: systemctl --version
	I1208 01:16:15.452235  930198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:16:15.467395  930198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:16:15.531819  930198 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:16:15.522508503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:16:15.532390  930198 kubeconfig.go:125] found "multinode-263003" server: "https://192.168.67.2:8443"
	I1208 01:16:15.532432  930198 api_server.go:166] Checking apiserver status ...
	I1208 01:16:15.532480  930198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:16:15.544332  930198 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1260/cgroup
	I1208 01:16:15.553284  930198 api_server.go:182] apiserver freezer: "13:freezer:/docker/d5eaae031a07fddf8b2df93b8aa8947a1799e330eb12c474badd21c9be637470/crio/crio-f8ed2583d61e4ad8deb86b60e9af14abfb85c18b88e8abe0699b486499c94b9e"
	I1208 01:16:15.553359  930198 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5eaae031a07fddf8b2df93b8aa8947a1799e330eb12c474badd21c9be637470/crio/crio-f8ed2583d61e4ad8deb86b60e9af14abfb85c18b88e8abe0699b486499c94b9e/freezer.state
	I1208 01:16:15.561663  930198 api_server.go:204] freezer state: "THAWED"
	I1208 01:16:15.561691  930198 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1208 01:16:15.570008  930198 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1208 01:16:15.570047  930198 status.go:463] multinode-263003 apiserver status = Running (err=<nil>)
	I1208 01:16:15.570092  930198 status.go:176] multinode-263003 status: &{Name:multinode-263003 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:16:15.570131  930198 status.go:174] checking status of multinode-263003-m02 ...
	I1208 01:16:15.570491  930198 cli_runner.go:164] Run: docker container inspect multinode-263003-m02 --format={{.State.Status}}
	I1208 01:16:15.588989  930198 status.go:371] multinode-263003-m02 host status = "Running" (err=<nil>)
	I1208 01:16:15.589016  930198 host.go:66] Checking if "multinode-263003-m02" exists ...
	I1208 01:16:15.589389  930198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-263003-m02
	I1208 01:16:15.606708  930198 host.go:66] Checking if "multinode-263003-m02" exists ...
	I1208 01:16:15.607134  930198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:16:15.607210  930198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-263003-m02
	I1208 01:16:15.624121  930198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33642 SSHKeyPath:/home/jenkins/minikube-integration/22054-789938/.minikube/machines/multinode-263003-m02/id_rsa Username:docker}
	I1208 01:16:15.732259  930198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:16:15.744928  930198 status.go:176] multinode-263003-m02 status: &{Name:multinode-263003-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:16:15.744959  930198 status.go:174] checking status of multinode-263003-m03 ...
	I1208 01:16:15.745295  930198 cli_runner.go:164] Run: docker container inspect multinode-263003-m03 --format={{.State.Status}}
	I1208 01:16:15.762949  930198 status.go:371] multinode-263003-m03 host status = "Stopped" (err=<nil>)
	I1208 01:16:15.762974  930198 status.go:384] host is not running, skipping remaining checks
	I1208 01:16:15.762981  930198 status.go:176] multinode-263003-m03 status: &{Name:multinode-263003-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-263003 node start m03 -v=5 --alsologtostderr: (7.700835274s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.52s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-263003
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-263003
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-263003: (25.044094888s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-263003 --wait=true -v=5 --alsologtostderr
E1208 01:17:29.408454  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-263003 --wait=true -v=5 --alsologtostderr: (51.398599558s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-263003
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.58s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-263003 node delete m03: (5.061094787s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
E1208 01:17:46.335898  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-263003 stop: (24.008452452s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-263003 status: exit status 7 (96.242324ms)

                                                
                                                
-- stdout --
	multinode-263003
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-263003-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr: exit status 7 (92.698997ms)

                                                
                                                
-- stdout --
	multinode-263003
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-263003-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:18:10.784174  938050 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:18:10.784309  938050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:18:10.784322  938050 out.go:374] Setting ErrFile to fd 2...
	I1208 01:18:10.784328  938050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:18:10.784593  938050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:18:10.784767  938050 out.go:368] Setting JSON to false
	I1208 01:18:10.784812  938050 mustload.go:66] Loading cluster: multinode-263003
	I1208 01:18:10.784887  938050 notify.go:221] Checking for updates...
	I1208 01:18:10.785794  938050 config.go:182] Loaded profile config "multinode-263003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:18:10.785819  938050 status.go:174] checking status of multinode-263003 ...
	I1208 01:18:10.786357  938050 cli_runner.go:164] Run: docker container inspect multinode-263003 --format={{.State.Status}}
	I1208 01:18:10.805492  938050 status.go:371] multinode-263003 host status = "Stopped" (err=<nil>)
	I1208 01:18:10.805517  938050 status.go:384] host is not running, skipping remaining checks
	I1208 01:18:10.805524  938050 status.go:176] multinode-263003 status: &{Name:multinode-263003 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:18:10.805560  938050 status.go:174] checking status of multinode-263003-m02 ...
	I1208 01:18:10.805878  938050 cli_runner.go:164] Run: docker container inspect multinode-263003-m02 --format={{.State.Status}}
	I1208 01:18:10.824723  938050 status.go:371] multinode-263003-m02 host status = "Stopped" (err=<nil>)
	I1208 01:18:10.824747  938050 status.go:384] host is not running, skipping remaining checks
	I1208 01:18:10.824754  938050 status.go:176] multinode-263003-m02 status: &{Name:multinode-263003-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.20s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-263003 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-263003 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.195365846s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-263003 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-263003
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-263003-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-263003-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.468123ms)

                                                
                                                
-- stdout --
	* [multinode-263003-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-263003-m02' is duplicated with machine name 'multinode-263003-m02' in profile 'multinode-263003'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-263003-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-263003-m03 --driver=docker  --container-runtime=crio: (32.300130352s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-263003
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-263003: exit status 80 (352.983835ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-263003 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-263003-m03 already exists in multinode-263003-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-263003-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-263003-m03: (2.108472068s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.92s)

                                                
                                    
TestScheduledStopUnix (106.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-380223 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-380223 --memory=3072 --driver=docker  --container-runtime=crio: (29.943557464s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380223 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 01:20:11.970977  946519 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:20:11.971140  946519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:11.971170  946519 out.go:374] Setting ErrFile to fd 2...
	I1208 01:20:11.971189  946519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:11.971451  946519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:20:11.971719  946519 out.go:368] Setting JSON to false
	I1208 01:20:11.971872  946519 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:11.972257  946519 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:20:11.972364  946519 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/config.json ...
	I1208 01:20:11.972578  946519 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:11.972733  946519 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-380223 -n scheduled-stop-380223
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380223 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 01:20:12.448827  946608 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:20:12.449001  946608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:12.449023  946608 out.go:374] Setting ErrFile to fd 2...
	I1208 01:20:12.449042  946608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:12.449402  946608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:20:12.449715  946608 out.go:368] Setting JSON to false
	I1208 01:20:12.449912  946608 daemonize_unix.go:73] killing process 946538 as it is an old scheduled stop
	I1208 01:20:12.449982  946608 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:12.450628  946608 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:20:12.450954  946608 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/config.json ...
	I1208 01:20:12.451146  946608 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:12.451292  946608 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1208 01:20:12.465443  791807 retry.go:31] will retry after 143.155µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.466148  791807 retry.go:31] will retry after 123.446µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.467265  791807 retry.go:31] will retry after 148.278µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.468335  791807 retry.go:31] will retry after 262.582µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.469451  791807 retry.go:31] will retry after 554.019µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.470593  791807 retry.go:31] will retry after 891.152µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.471725  791807 retry.go:31] will retry after 970.408µs: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.472839  791807 retry.go:31] will retry after 1.854572ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.475755  791807 retry.go:31] will retry after 3.246196ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.479937  791807 retry.go:31] will retry after 5.286281ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.486174  791807 retry.go:31] will retry after 5.80543ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.492405  791807 retry.go:31] will retry after 7.027089ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.499581  791807 retry.go:31] will retry after 12.970967ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.514962  791807 retry.go:31] will retry after 12.873851ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.528216  791807 retry.go:31] will retry after 35.844935ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
I1208 01:20:12.564472  791807 retry.go:31] will retry after 44.281446ms: open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380223 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1208 01:20:34.380818  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380223 -n scheduled-stop-380223
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-380223
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-380223 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 01:20:38.422770  946973 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:20:38.422931  946973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:38.422942  946973 out.go:374] Setting ErrFile to fd 2...
	I1208 01:20:38.422948  946973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:38.423316  946973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-789938/.minikube/bin
	I1208 01:20:38.423751  946973 out.go:368] Setting JSON to false
	I1208 01:20:38.424068  946973 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:38.424538  946973 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 01:20:38.424654  946973 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/scheduled-stop-380223/config.json ...
	I1208 01:20:38.424895  946973 mustload.go:66] Loading cluster: scheduled-stop-380223
	I1208 01:20:38.425051  946973 config.go:182] Loaded profile config "scheduled-stop-380223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1208 01:20:45.330261  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-380223
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-380223: exit status 7 (70.403401ms)

                                                
                                                
-- stdout --
	scheduled-stop-380223
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380223 -n scheduled-stop-380223
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-380223 -n scheduled-stop-380223: exit status 7 (71.467533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-380223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-380223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-380223: (5.071397267s)
--- PASS: TestScheduledStopUnix (106.68s)

                                                
                                    
TestInsufficientStorage (12.77s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-651972 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-651972 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.16173123s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18f24cf5-64df-4e23-9a94-51d5e58bc1b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-651972] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17fba479-97dc-4448-ae6d-c40f6a37a529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"18ee6de9-e622-4a3c-93c2-06d79d5beddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e061c808-eaa5-4167-9262-03517e71d884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig"}}
	{"specversion":"1.0","id":"c63884c5-0a22-4dea-82d4-7856894b7681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube"}}
	{"specversion":"1.0","id":"0f2bade9-6ab0-4eca-a56b-71a5f86c5dc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"639e91c4-6ad2-4bb5-86bf-c0b97a5c4df5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d7421803-c373-4de7-b48e-2b6ccdc899fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a9355916-3eba-4d81-b475-e3358c2ed740","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bb87d2de-619d-4448-9117-c2742079b87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f4783b1-157b-4e82-b958-ff8a1ecf08ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0a920ac8-25b1-4883-8ea3-608f70f0dfd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-651972\" primary control-plane node in \"insufficient-storage-651972\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"98c3acb5-babc-4527-9bba-b578f2264c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ca56a97-899a-4131-b03c-3896039adea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb2bd624-5088-438a-8090-c1d39cfd2d32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-651972 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-651972 --output=json --layout=cluster: exit status 7 (315.442095ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-651972","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-651972","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:21:39.110952  948682 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-651972" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-651972 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-651972 --output=json --layout=cluster: exit status 7 (305.801514ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-651972","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-651972","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:21:39.420020  948746 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-651972" does not appear in /home/jenkins/minikube-integration/22054-789938/kubeconfig
	E1208 01:21:39.430043  948746 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/insufficient-storage-651972/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-651972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-651972
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-651972: (1.989583065s)
--- PASS: TestInsufficientStorage (12.77s)

                                                
                                    
TestRunningBinaryUpgrade (303.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3772065704 start -p running-upgrade-457612 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3772065704 start -p running-upgrade-457612 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.571833599s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-457612 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1208 01:30:34.379399  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:30:45.329540  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:32:46.336324  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-457612 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.253681904s)
helpers_test.go:175: Cleaning up "running-upgrade-457612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-457612
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-457612: (1.947608934s)
--- PASS: TestRunningBinaryUpgrade (303.02s)

                                                
                                    
TestMissingContainerUpgrade (121.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1907487895 start -p missing-upgrade-156445 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1907487895 start -p missing-upgrade-156445 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.78835447s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-156445
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-156445
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-156445 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-156445 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.064716324s)
helpers_test.go:175: Cleaning up "missing-upgrade-156445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-156445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-156445: (3.043676948s)
--- PASS: TestMissingContainerUpgrade (121.73s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.17239ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-526754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-789938/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-789938/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526754 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.363563695s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-526754 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1208 01:22:46.336409  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.91001385s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-526754 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-526754 status -o json: exit status 2 (322.350609ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-526754","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-526754
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-526754: (2.065426895s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.30s)

                                                
                                    
TestNoKubernetes/serial/Start (8.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526754 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.05007093s)
--- PASS: TestNoKubernetes/serial/Start (8.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22054-789938/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-526754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-526754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.604064ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-526754
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-526754: (1.294040221s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526754 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526754 --driver=docker  --container-runtime=crio: (7.723740976s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.72s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-526754 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-526754 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.857263ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (305.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1666912392 start -p stopped-upgrade-971260 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1666912392 start -p stopped-upgrade-971260 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.216289227s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1666912392 -p stopped-upgrade-971260 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1666912392 -p stopped-upgrade-971260 stop: (1.874918174s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-971260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1208 01:25:34.380291  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:25:45.329245  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:26:57.458685  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:27:46.335913  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:28:48.396310  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-971260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.350565114s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.44s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-971260
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-971260: (1.842737609s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.84s)

                                                
                                    
TestPause/serial/Start (81.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-814452 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1208 01:34:09.410982  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-814452 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.070796149s)
--- PASS: TestPause/serial/Start (81.07s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.52s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-814452 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1208 01:35:34.379756  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-814452 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.492807097s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (58.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.495287469s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-661561 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d38f2f89-9cb5-463f-96c6-e17dab365206] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d38f2f89-9cb5-463f-96c6-e17dab365206] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003064233s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-661561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-661561 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-661561 --alsologtostderr -v=3: (12.031682465s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561: exit status 7 (76.680899ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-661561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-661561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.247184002s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-661561 -n old-k8s-version-661561
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dxkn2" [def4e9ad-e2af-40d8-8910-1ba40eff5ffd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003640828s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-dxkn2" [def4e9ad-e2af-40d8-8910-1ba40eff5ffd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004331739s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-661561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-661561 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 01:40:34.379385  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:40:45.329769  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (1m25.268611429s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-172173 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1a4b14cf-3f65-47a3-9443-2564682f3dae] Pending
helpers_test.go:352: "busybox" [1a4b14cf-3f65-47a3-9443-2564682f3dae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1a4b14cf-3f65-47a3-9443-2564682f3dae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006486675s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-172173 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-172173 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-172173 --alsologtostderr -v=3: (11.999429284s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173: exit status 7 (68.875473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-172173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 01:42:46.336433  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-172173 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (50.491142517s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-172173 -n embed-certs-172173
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jsh6" [bf65e0a8-7e16-4a0d-a847-144a1ac9488b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003734209s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jsh6" [bf65e0a8-7e16-4a0d-a847-144a1ac9488b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003692447s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-172173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-172173 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 01:43:37.460020  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:51.933723  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:51.940093  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:51.951454  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:51.972803  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:52.014153  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:52.095511  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:52.257100  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:52.579344  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:53.220615  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:54.502228  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:43:57.064024  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:02.186092  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:12.428318  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:32.910090  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (1m19.360984486s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [82b0cf89-8ccd-4661-9916-328846a942d2] Pending
helpers_test.go:352: "busybox" [82b0cf89-8ccd-4661-9916-328846a942d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [82b0cf89-8ccd-4661-9916-328846a942d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003890744s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-993283 --alsologtostderr -v=3
E1208 01:45:13.872310  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/old-k8s-version-661561/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-993283 --alsologtostderr -v=3: (12.029038694s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283: exit status 7 (79.251662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-993283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 01:45:28.398327  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:45:34.380193  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/addons-429840/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:45:45.329932  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-525396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (51.764301277s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.12s)
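
Note: the SecondStart invocation can be reused as-is to bring a stopped profile back up; the sketch below only appends a status check. All flag values (3072 MB memory, API server port 8444, docker driver, crio runtime, Kubernetes v1.34.2) come straight from the log.
# Restart the existing profile and wait for the cluster to become healthy.
out/minikube-linux-arm64 start -p default-k8s-diff-port-993283 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.2
# Confirm the host is running again.
out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-993283 -n default-k8s-diff-port-993283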

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jd27p" [7a30d9ec-71bb-42f5-af3b-e6a942ad3064] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002642851s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jd27p" [7a30d9ec-71bb-42f5-af3b-e6a942ad3064] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003802305s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-993283 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)
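
Note: both dashboard checks poll for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then describe the metrics-scraper deployment. A rough manual equivalent, assuming the dashboard addon was enabled as in the EnableAddonAfterStop step:
# Wait for the dashboard pod to become Ready (the test allows up to 9 minutes).
kubectl --context default-k8s-diff-port-993283 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
# Inspect the scraper deployment, as the AddonExistsAfterStop step does.
kubectl --context default-k8s-diff-port-993283 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper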

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-993283 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
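
Note: the image audit lists every image known to the profile and flags anything that is not a stock minikube/Kubernetes image (here kindnetd and the busybox test image). The jq filter below is illustrative only and assumes a repoTags field in minikube's JSON output; it is not part of the test.
# List images as JSON and print their repo tags for a quick manual review.
out/minikube-linux-arm64 -p default-k8s-diff-port-993283 image list --format=json | jq -r '.[].repoTags[]?'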

TestStartStop/group/no-preload/serial/Stop (1.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-389831 --alsologtostderr -v=3
E1208 01:50:49.412864  791807 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-789938/.minikube/profiles/functional-714395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-389831 --alsologtostderr -v=3: (1.379933752s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.38s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-389831 -n no-preload-389831: exit status 7 (75.791099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-389831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-448023 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-448023 --alsologtostderr -v=3: (1.36049911s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-448023 -n newest-cni-448023: exit status 7 (73.481505ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-448023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-448023 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)


Test skip (38/364)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
350 TestPreload 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-748036 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-748036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-748036
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:35: skipping TestPreload - user-pulled images not persisted across restarts with crio
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-503313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-503313
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
